CN115760898A - World coordinate positioning method for road sprinklers in mixed Gaussian domain - Google Patents
- Publication number: CN115760898A (application CN202211540766.XA)
- Authority: CN (China)
- Prior art keywords: point cloud, three-dimensional, shadow, time, cloud data
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Description
Technical Field

The present invention belongs to the technical field of traffic safety management, and in particular relates to a world coordinate positioning method for road-spilled objects in a mixed Gaussian domain.

Background

With the rapid development of expressway construction, road traffic volume has increased significantly, and the number of accidents caused by spilled objects has risen sharply. Expressways are an important part of modern transportation; their traffic flow is enormous, safety accidents have occurred frequently in recent years, and even a small object dropped on the road can cause a serious accident. There is therefore an urgent need for an efficient vehicle-mounted road information acquisition and detection system that sorts and evaluates the information collected by the sensors, quickly tracks objects spilled on the road, and locates them precisely in world coordinates. As an effective means of detecting and locating spilled objects on highways, such a system allows countermeasures to be taken in time, reduces the damage that spilled objects cause to following vehicles, protects the lives of drivers and passengers, reduces traffic accidents, saves a large amount of manpower, improves efficiency, and conforms to the trend of highway management developing from extensive operation toward informatization.

As the two most widely used sensors with the most convenient data acquisition, cameras and lidar can collect many kinds of road information. Current detection and tracking of road-spilled objects mostly applies detection algorithms such as YOLO to detect and locate the object in the frame in which it appears, and then tracks it with traditional algorithms such as Kalman filtering, mean-shift, or pipeline tracking, or with deep-learning-based target tracking algorithms. In practice, road traffic is heavy and road conditions are extremely complex; the mirrors and windows of vehicles indirectly create an extremely complex detection environment through the reflection and refraction of light. Traditional tracking methods have difficulty adapting to such complex changes and easily lose the target or drift away from it, so a deep-learning-based target tracking algorithm is needed to track the target effectively.
Summary of the Invention

In view of the above defects, the present invention provides a world coordinate positioning method for road-spilled objects in a mixed Gaussian domain. The invention can effectively improve the detection accuracy of spilled objects in complex lighting environments and effectively alleviate the tracking-drift problem of spilled objects in such environments. Compared with traditional tracking algorithms, the present invention can effectively improve the ability to track small targets.

The present invention provides the following technical solution: a world coordinate positioning method for road-spilled objects in a mixed Gaussian domain, the method comprising the following steps:

S1. Collect images through a camera and use a mixed Gaussian distribution model with multiple Gaussian components to determine, for each pixel of the collected image, whether it belongs to the foreground or the background; continuously update the model and perform shadow-pixel detection, extract the required foreground, and apply morphological processing (a closing operation composed of erosion and dilation) to obtain a relatively complete foreground;

S2. Use the pixel-mean method to analyze the category of each contour in the real-time image collected at time t, in order to determine whether it is a foreground contour or a shadow contour;

S3. Introduce the MatchShapes operator and use the translation, scaling, and rotation invariance of the Hu moment invariants to match target contours against shadow contours; for each successful object-shadow match, pre-store the object and shadow contours and update the screening; initialize and save the suspected-spilled-object shadow contour template and pose of the matching result;

S4. After step S3 yields the located suspected spilled object, apply the method of step S2 to remove shadows from the newly arrived foreground and obtain the foreground target; use the template to perform object-shadow matching on the targets in the foreground, complete the tracking, and output the spilled-object coordinates obtained by tracking and positioning;

S5. Use a lidar to collect three-dimensional point cloud data of the spilled object; build a three-dimensional KD tree to organize the collected unordered 3-D point cloud data into ordered 3-D point cloud data; cluster the ordered 3-D point cloud data by computing Euclidean distances; convert the clustered ordered 3-D point cloud data into spilled-object coordinates in a two-dimensional plane by matrix transformation; and fuse the converted two-dimensional spilled-object coordinates with the spilled-object coordinates output in step S4 to complete the world coordinate positioning of the road-spilled object.
Further, the mixed Gaussian distribution model with multiple Gaussian components in step S1 is as follows:

p(x^(t) | χT, BG+FG) = Σ(m=1..M) πm·η(x^(t); μm, σm²·I)

where M is the number of Gaussian components of the mixed Gaussian distribution model; χT is the sample set collected within the time window T ending at time t, χT = {x^(t), ..., x^(t-T)}, and new samples are continuously added within the time window T to update the background model so as to adapt to complex environmental changes; BG and FG denote the background component and the foreground component respectively; πm is the weight of the m-th of the M mixture components; η is the probability density function of the Gaussian distribution; μm and σm²·I are the mean vector and covariance matrix of the pixel at time t, where σm² is the estimated variance of the Gaussian distribution and I is the identity matrix.
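As an illustration only, the mixture density above can be evaluated numerically as follows; this is a minimal sketch with made-up component parameters, not the patent's implementation:

```python
import numpy as np

def gmm_density(x, weights, means, variances):
    """Evaluate p(x) = sum_m pi_m * N(x; mu_m, sigma_m^2 * I) for a
    D-dimensional pixel value x under M isotropic Gaussian components."""
    x = np.asarray(x, dtype=float)
    d = x.size
    p = 0.0
    for pi_m, mu_m, var_m in zip(weights, means, variances):
        diff = x - np.asarray(mu_m, dtype=float)
        norm = (2.0 * np.pi * var_m) ** (-d / 2.0)   # isotropic covariance var_m * I
        p += pi_m * norm * np.exp(-0.5 * np.dot(diff, diff) / var_m)
    return p

# Example: a BGR pixel evaluated under a 2-component background model.
weights = [0.7, 0.3]
means = [[100.0, 100.0, 100.0], [200.0, 200.0, 200.0]]
variances = [30.0, 30.0]
p_bg = gmm_density([101.0, 99.0, 100.0], weights, means, variances)   # near component 1
p_fg = gmm_density([10.0, 250.0, 10.0], weights, means, variances)    # far from both
```

A pixel lying far from every component has near-zero density and would be assigned to the foreground.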
Further, during the continuous updating of the mixed Gaussian distribution model in step S1, shadow-pixel detection is performed on the image, comprising the following steps:

S11. Construct the discriminant for a shadow pixel St(x, y): the pixel at (x, y) is a shadow pixel when all of the following hold,

α ≤ It(x,y)^V / Bt(x,y)^V ≤ β

|It(x,y)^H − Bt(x,y)^H| ≤ τH

|It(x,y)^S − Bt(x,y)^S| ≤ τS

where It(x, y) is the real-time image collected at time t in step S1; Bt(x, y) is the background image maintained by the model at time t; H, S, and V are the HSV components obtained by converting the RGB image collected at time t in step S1 to HSV space; α is the first brightness-sensitivity adjustment parameter for shadow detection, β is the second brightness-sensitivity adjustment parameter for shadow detection, τH is the first noise-sensitivity adjustment parameter for shadow detection, and τS is the second noise-sensitivity adjustment parameter for shadow detection;

S12. Judge whether each pixel of the detected image satisfies the shadow-pixel discriminant constructed in step S11; if so, mark it as a shadow pixel.
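The shadow discriminant above can be sketched as follows; the parameter values for α, β, τH, and τS are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def shadow_mask(img_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_h=10.0, tau_s=40.0):
    """Mark shadow pixels: a pixel is shadow when its V ratio to the background
    lies in [alpha, beta] and its H and S differences stay within tau_h, tau_s.
    img_hsv, bg_hsv: float arrays of shape (rows, cols, 3) in H, S, V order."""
    h_i, s_i, v_i = img_hsv[..., 0], img_hsv[..., 1], img_hsv[..., 2]
    h_b, s_b, v_b = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]
    ratio = v_i / np.maximum(v_b, 1e-6)            # brightness attenuation
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(h_i - h_b) <= tau_h)
            & (np.abs(s_i - s_b) <= tau_s))

# Background is uniform; one pixel is darkened (shadow), one is recolored (object).
bg = np.tile(np.array([30.0, 80.0, 200.0]), (1, 2, 1))
img = bg.copy()
img[0, 0, 2] = 120.0                 # V drops to 0.6 of background -> shadow
img[0, 1] = [90.0, 200.0, 120.0]     # hue/saturation change -> foreground object
mask = shadow_mask(img, bg)
```

A darkened pixel with unchanged chromaticity is flagged as shadow, while a recolored pixel is kept as foreground.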
Further, the update equations by which the mixed Gaussian distribution model with multiple Gaussian components is continuously updated in step S1 are as follows:

πm ← πm + α·(om^(t) − πm)

μm ← μm + om^(t)·(α/πm)·δm

σm² ← σm² + om^(t)·(α/πm)·(δmᵀ·δm − σm²)

where δm = x^(t) − μm is the central vector; om^(t) is the ownership factor indicating whether the pixel x^(t) of the real-time image at time t fits a distribution component, set to 1 for the best-fitting component and 0 for the remaining components; and α is an exponentially decaying envelope used to limit the influence of old data.

While continuously updating the mixed Gaussian distribution model with multiple Gaussian components, whether the pixel x^(t) of the real-time image at time t fits an existing Gaussian component is judged by whether its Mahalanobis distance Dm(x^(t)) is less than three standard deviations. The Mahalanobis distance of the pixel x^(t) of the real-time image at time t is computed as:

Dm²(x^(t)) = δmᵀ·δm / σm²

where σm is the standard deviation of the Gaussian distribution.
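The component-matching test can be sketched as follows; with the isotropic covariance σm²·I the squared Mahalanobis distance reduces to ‖δm‖²/σm², and the three-standard-deviation rule becomes Dm² < 9. This is a minimal sketch under that reading:

```python
import numpy as np

def matches_component(x, mu, var, k=3.0):
    """With isotropic covariance var * I the squared Mahalanobis distance is
    ||x - mu||^2 / var; the pixel matches the component when D < k (k std devs)."""
    delta = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    d2 = np.dot(delta, delta) / var
    return d2 < k * k

mu, var = np.array([100.0, 100.0, 100.0]), 25.0   # std = 5 per channel
close = matches_component([104.0, 98.0, 101.0], mu, var)   # within 3 std
far = matches_component([140.0, 100.0, 100.0], mu, var)    # 8 std away on one axis
```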
Further, the discriminant used by the pixel-mean method in step S2 to analyze the category of a contour in the real-time image collected at time t is:

P = (1/I)·Σ(i=1..I) bwi

where I is the total number of pixels of the real-time image collected at time t, i indexes the i-th pixel of that image, and bwi is the target pixel value;

When P > 191, the target foreground points in the real-time image collected at time t outnumber the shadow points, and the contour target of the image is judged to be a foreground contour.
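A minimal sketch of the pixel-mean test, with made-up pixel values:

```python
import numpy as np

def contour_is_foreground(pixel_values, threshold=191.0):
    """Pixel-mean test: P = (1/I) * sum(bw_i); a contour whose mean pixel
    value exceeds the threshold is classified as a foreground contour."""
    p = float(np.mean(pixel_values))
    return p > threshold

bright_region = np.array([255, 255, 200, 240, 250], dtype=float)  # mean 240: foreground
dark_region = np.array([120, 90, 130, 100, 110], dtype=float)     # mean 110: shadow
```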
Further, building a three-dimensional KD tree in step S5 to organize the collected unordered 3-D point cloud data into ordered 3-D point cloud data comprises the following steps:

S51. Take the three-dimensional unordered point cloud data T = {p1, p2, p3, ...} collected by the lidar and initialize the split axis: compute the variance of the point cloud position data in each dimension, take the dimension with the largest variance as the splitting hyperplane, and denote it r;

S52. Search the current point cloud position data along the dimension of the splitting hyperplane, find the median datum, and place it at the current node; the root node corresponds to the hyperplane region of the three-dimensional space containing T;

S53. Split the current hyperplane dimension into two sub-hyperplane dimensions: all values smaller than the median are assigned to the left-branch sub-hyperplane dimension, and all values greater than or equal to the median are assigned to the right-branch sub-hyperplane dimension;

S54. Update the splitting hyperplane r obtained in step S51 and store the points assigned to the sub-hyperplane dimensions under the current root node; the update formula for the splitting hyperplane r is r = (r + 1) % 3, where % denotes the remainder operation;

S55. Repeat step S52 on the data of the left-branch sub-hyperplane dimension obtained in step S53 until the sub-dimension contains no data, obtaining the left node; repeat step S52 on the data of the right-branch sub-hyperplane dimension obtained in step S53 until the sub-dimension contains no data, obtaining the right node;

S56. Output the KD tree corresponding to the point cloud, and organize the collected unordered 3-D point cloud data into ordered 3-D point cloud data according to this KD tree.
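Steps S51-S56 can be sketched as the following recursive construction; this is an illustrative reading (median point stored at each node, split axis cycling via r = (r + 1) % 3), not the patent's implementation:

```python
import numpy as np

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kd_tree(points, axis=None):
    """Build a 3-D KD tree: the first split axis is the dimension of largest
    variance (S51); afterwards the axis cycles r = (r + 1) % 3 (S54)."""
    if len(points) == 0:
        return None
    pts = np.asarray(points, dtype=float)
    if axis is None:                          # S51: axis of maximum variance
        axis = int(np.argmax(pts.var(axis=0)))
    order = np.argsort(pts[:, axis])          # S52: median along the split axis
    pts = pts[order]
    mid = len(pts) // 2
    nxt = (axis + 1) % 3                      # S54: cycle the split axis
    return KDNode(pts[mid], axis,
                  build_kd_tree(pts[:mid], nxt),      # S53: values below the median go left
                  build_kd_tree(pts[mid + 1:], nxt))  # values at or above go right

def count_nodes(node):
    return 0 if node is None else 1 + count_nodes(node.left) + count_nodes(node.right)

cloud = np.random.default_rng(0).uniform(0.0, 10.0, size=(15, 3))
tree = build_kd_tree(cloud)
```

Each input point ends up at exactly one node, so the tree organizes the unordered cloud without discarding data.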
Further, the Euclidean distance used in step S5 to cluster the ordered three-dimensional point cloud data is computed, for two points pi and pj, as:

d(pi, pj) = sqrt((xi − xj)² + (yi − yj)²)

where xi is the abscissa of the i-th point in the ordered three-dimensional point cloud data, yi is its ordinate, and n is the total number of points in the ordered three-dimensional point cloud data.
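The Euclidean clustering step can be sketched as a naive region-growing pass; the real pipeline uses the KD tree to accelerate the neighbor search, which this O(n²) sketch omits, and the distance threshold is an assumption:

```python
import numpy as np

def euclidean_cluster(points, radius):
    """Group points whose Euclidean-distance chains stay within `radius`
    (a naive stand-in for KD-tree-assisted Euclidean cluster extraction)."""
    pts = np.asarray(points, dtype=float)
    labels = np.full(len(pts), -1)
    current = 0
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            d = np.linalg.norm(pts - pts[i], axis=1)   # Euclidean distances
            for j in np.where((d <= radius) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two well-separated groups of points should yield two clusters.
pts = np.array([[0, 0, 0], [0.2, 0, 0], [0, 0.1, 0],
                [5, 5, 5], [5.1, 5, 5]], dtype=float)
labels = euclidean_cluster(pts, radius=0.5)
```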
Further, in step S5, converting the clustered ordered three-dimensional point cloud data into spilled-object coordinates in the two-dimensional plane by matrix transformation comprises the following steps:

1) Construct the transfer matrix M, which transfers the three-dimensional coordinates (X, Y, Z) of the target spilled object in point cloud space to its coordinates (u, v) in the two-dimensional plane:

ZC·[u, v, 1]ᵀ = M·[X, Y, Z, 1]ᵀ

where mab is the transfer parameter in row a, column b of the transfer matrix M, a = 1, 2, 3; b = 1, 2, 3, 4; and ZC is the z-axis coordinate in the camera coordinate system of the camera;

2) According to the transfer matrix, the coordinates (u, v) of the target spilled object in the two-dimensional plane are computed as:

u = (m11·X + m12·Y + m13·Z + m14) / ZC

v = (m21·X + m22·Y + m23·Z + m24) / ZC

ZC = m31·X + m32·Y + m33·Z + m34

3) The calculation in step 2) yields a series of linear equations; solving them gives the calibration parameters, from which the coordinates (u, v) of the target spilled object in the two-dimensional plane are obtained.
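Steps 1) and 2) can be sketched as follows; the matrix entries are illustrative pinhole-style values (focal length 500, principal point (320, 240)), since the actual mab come from calibration:

```python
import numpy as np

def project_point(M, xyz):
    """Apply Z_C * [u, v, 1]^T = M @ [X, Y, Z, 1]^T with a 3x4 transfer
    matrix M and return the 2-D plane coordinates (u, v)."""
    p = M @ np.append(np.asarray(xyz, dtype=float), 1.0)  # homogeneous coordinates
    z_c = p[2]                        # Z_C = m31*X + m32*Y + m33*Z + m34
    return p[0] / z_c, p[1] / z_c

# Illustrative transfer matrix; a real M is obtained from calibration.
M = np.array([[500.0, 0.0, 320.0, 0.0],
              [0.0, 500.0, 240.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
u, v = project_point(M, [1.0, 0.5, 10.0])
```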
Further, the three-dimensional coordinates (X, Y, Z) in point cloud space in step 1) are obtained by dividing the collected three-dimensional data into voxels on a voxel grid: a three-dimensional voxel grid with voxels of 1 cubic centimeter is created, and within each voxel the centroid of all points contained in the voxel is used to approximate the other points in the voxel. The centroid is computed as:

x_central = (1/m)·Σ(i=1..m) xi

y_central = (1/m)·Σ(i=1..m) yi

z_central = (1/m)·Σ(i=1..m) zi

where m is the total number of points contained in a voxel of the voxelized three-dimensional coordinates of the detected object, and x_central, y_central, and z_central are the x-, y-, and z-axis coordinates of the centroid; all points within such a voxel are finally represented by this centroid. In the three-dimensional coordinates (X, Y, Z) in point cloud space, X = x_central, Y = y_central, Z = z_central.
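The voxel-centroid downsampling can be sketched as follows, with the `leaf` parameter standing in for the 1 cm³ voxel of the text:

```python
import numpy as np

def voxel_downsample(points, leaf=1.0):
    """Voxel-grid filter: bucket points into cubes of side `leaf` and
    replace each bucket by the centroid of its points."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / leaf).astype(int)
    buckets = {}
    for key, p in zip(map(tuple, keys), pts):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

# Three points inside one voxel collapse to a single centroid; the far
# point survives as its own voxel.
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.3, 0.2], [0.3, 0.2, 0.3],
                [5.5, 5.5, 5.5]])
down = voxel_downsample(pts, leaf=1.0)
```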
The present invention has the following beneficial effects:

1. The method can effectively improve the accuracy of spilled-object detection in complex lighting environments.

2. The method can effectively alleviate the tracking-drift problem of spilled objects in complex lighting environments.

3. Compared with traditional tracking algorithms, the method provided by the present invention can effectively improve the ability to track small targets.

4. The present invention provides a world coordinate positioning method for road-spilled objects in a mixed Gaussian domain. It identifies spilled objects in real time and tracks their trajectories through the camera's object-shadow matching and tracking algorithm; the lidar's simultaneous localization and mapping (SLAM) computation, combined with the voxel filtering algorithm and the Euclidean clustering algorithm, collects and processes road data in real time. Through matrix transformation and sensor fusion, the spilled object is precisely positioned from the two-dimensional image to the three-dimensional world coordinate system, providing early warning of expressway spilled-object accidents, preventing secondary accidents, and reducing unnecessary losses.

5. In the world coordinate positioning method for road-spilled objects in a mixed Gaussian domain provided by the present invention, after images are collected by the camera and the target is located, the lidar collects three-dimensional point cloud data for data fusion. This avoids the unavoidable limitations of a single sensor in practice: a camera alone is prone to inaccuracy, provides no depth information, and has a limited field of view. Therefore, after target positioning from the camera images, fusing the three-dimensional point cloud data collected by the lidar further improves the robustness of the system; the multi-sensor fusion scheme, with time and space synchronization of the different sensors, improves the positioning accuracy of road-spilled objects.
Brief Description of the Drawings

Hereinafter, the present invention will be described in more detail based on the embodiments with reference to the accompanying drawings, in which:

Fig. 1 is a flowchart of the spilled-object coordinate positioning performed in steps S1-S4 of the method provided by the present invention;

Fig. 2 is a schematic flowchart of organizing unordered three-dimensional point cloud data by building a three-dimensional KD tree in step S5 of the method provided by the present invention;

Fig. 3 is a schematic flowchart of the clustering with the Euclidean distance assisted by the three-dimensional KD tree provided by the present invention.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

The present invention provides a world coordinate positioning method for road-spilled objects in a mixed Gaussian domain, the method comprising the following steps:
S1. Collect images through a camera and use a mixed Gaussian distribution model with multiple Gaussian components to determine, for each pixel of the collected image, whether it belongs to the foreground or the background; continuously update the model and perform shadow-pixel detection, extract the required foreground, and apply morphological processing (a closing operation composed of erosion and dilation), which effectively eliminates the noise generated by the complex environment and sharpens the image edges, so as to obtain a relatively complete foreground;
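The morphological closing used here can be sketched in pure numpy as follows; a real system would typically use a library routine, and the 3x3 structuring element is an assumption:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    r = k // 2
    pad = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= pad[r + dy: r + dy + mask.shape[0],
                       r + dx: r + dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    r = k // 2
    pad = np.pad(mask, r)
    out = np.ones_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out &= pad[r + dy: r + dy + mask.shape[0],
                       r + dx: r + dx + mask.shape[1]]
    return out

def close_mask(mask, k=3):
    """Morphological closing (dilation followed by erosion) to fill small
    holes in the extracted foreground mask."""
    return erode(dilate(mask, k), k)

# A foreground blob with a one-pixel hole: closing fills the hole.
mask = np.ones((7, 7), dtype=bool)
mask[3, 3] = False
closed = close_mask(mask)
```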
S2. Use the pixel-mean method to analyze the category of each contour in the real-time image collected at time t, in order to determine whether it is a foreground contour or a shadow contour;
S3. Introduce the MatchShapes operator and use the translation, scaling, and rotation invariance of the Hu moment invariants (normalized central moments of a discrete image) to match target contours against shadow contours; for each successful object-shadow match, pre-store the object and shadow contours and update the screening; initialize and save the suspected-spilled-object shadow contour template and pose of the matching result;
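The invariance that makes the Hu-moment matching of step S3 work can be illustrated as follows; only the first two of the seven Hu invariants are computed here, as a sketch rather than the full MatchShapes comparison:

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a binary image, computed from
    normalized central moments; invariant to translation and scale."""
    ys, xs = np.nonzero(img)
    m00 = float(len(xs))
    xc, yc = xs.mean(), ys.mean()
    def mu(p, q):                      # central moment mu_pq
        return np.sum((xs - xc) ** p * (ys - yc) ** q)
    def eta(p, q):                     # normalized central moment eta_pq
        return mu(p, q) / m00 ** ((p + q) / 2 + 1)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

# An L-shaped blob and a translated copy have identical Hu moments,
# which is what makes contour-vs-shadow template matching possible.
img = np.zeros((20, 20), dtype=np.uint8)
img[2:10, 2:5] = 1
img[8:10, 2:12] = 1
shifted = np.roll(np.roll(img, 6, axis=0), 6, axis=1)
```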
S4. After step S3 yields the located suspected spilled object, apply the method of step S2 to remove shadows from the newly arrived foreground and obtain the foreground target; use the template to perform object-shadow matching on the targets in the foreground, complete the tracking, and output the spilled-object coordinates obtained by tracking and positioning;

S5. Use a lidar to collect three-dimensional point cloud data of the spilled object; build a three-dimensional KD tree to organize the collected unordered 3-D point cloud data into ordered 3-D point cloud data; cluster the ordered 3-D point cloud data by computing Euclidean distances; convert the clustered ordered 3-D point cloud data into spilled-object coordinates in a two-dimensional plane by matrix transformation; and fuse the converted two-dimensional spilled-object coordinates with the spilled-object coordinates output in step S4 to complete the world coordinate positioning of the road-spilled object.
As a preferred embodiment of the present invention, as shown in Fig. 1, the mixed Gaussian distribution model with multiple Gaussian components in step S1 is as follows:

p(x^(t) | χT, BG+FG) = Σ(m=1..M) πm·η(x^(t); μm, σm²·I)

where M is the number of Gaussian components of the mixed Gaussian distribution model; χT is the sample set collected within the time window T ending at time t, χT = {x^(t), ..., x^(t-T)}, and new samples are continuously added within the time window T to update the background model so as to adapt to complex environmental changes; BG and FG denote the background component and the foreground component respectively; πm is the weight of the m-th of the M mixture components; η is the probability density function of the Gaussian distribution; μm and σm²·I are the mean vector and covariance matrix of the pixel at time t, where σm² is the estimated variance of the Gaussian distribution and I is the identity matrix.
As another preferred embodiment of the present invention, in order to optimize the background-update mode of the mixed Gaussian distribution model, a shadow detection module is introduced during the continuous updating of the model in step S1 to perform shadow-pixel detection on the image, comprising the following steps:

S11. Construct the discriminant for a shadow pixel St(x, y): the pixel at (x, y) is a shadow pixel when all of the following hold,

α ≤ It(x,y)^V / Bt(x,y)^V ≤ β

|It(x,y)^H − Bt(x,y)^H| ≤ τH

|It(x,y)^S − Bt(x,y)^S| ≤ τS

where It(x, y) is the real-time image collected at time t in step S1; Bt(x, y) is the background image maintained by the model at time t; H, S, and V are the HSV components obtained by converting the RGB image collected at time t in step S1 to HSV space; α is the first brightness-sensitivity adjustment parameter for shadow detection, β is the second brightness-sensitivity adjustment parameter for shadow detection, τH is the first noise-sensitivity adjustment parameter for shadow detection, and τS is the second noise-sensitivity adjustment parameter for shadow detection;

S12. Judge whether each pixel of the detected image satisfies the shadow-pixel discriminant constructed in step S11; if so, mark it as a shadow pixel.

The shadow detection is performed by chromaticity analysis; it is suitable for detecting the shadows of moving objects and suppresses background shadows.
Further preferably, the update equations by which the mixed Gaussian distribution model with multiple Gaussian components is continuously updated in step S1 are as follows:

πm ← πm + α·(om^(t) − πm)

μm ← μm + om^(t)·(α/πm)·δm

σm² ← σm² + om^(t)·(α/πm)·(δmᵀ·δm − σm²)

where δm = x^(t) − μm is the central vector; om^(t) is the ownership factor indicating whether the pixel x^(t) of the real-time image at time t fits a distribution component, set to 1 for the best-fitting component and 0 for the remaining components; and α is an exponentially decaying envelope used to limit the influence of old data.

While continuously updating the mixed Gaussian distribution model with multiple Gaussian components, whether the pixel x^(t) of the real-time image at time t fits an existing Gaussian component is judged by whether its Mahalanobis distance Dm(x^(t)) is less than three standard deviations. The Mahalanobis distance of the pixel x^(t) of the real-time image at time t is computed as:

Dm²(x^(t)) = δmᵀ·δm / σm²

where σm is the standard deviation of the Gaussian distribution.
As another preferred embodiment of the present invention, the discriminant used by the pixel-mean method in step S2 to analyze the category of a contour in the real-time image collected at time t is:

P = (1/I)·Σ(i=1..I) bwi

where I is the total number of pixels of the real-time image collected at time t, i indexes the i-th pixel of that image, and bwi is the target pixel value;

When P > 191, the target foreground points in the real-time image collected at time t outnumber the shadow points, and the contour target of the image is judged to be a foreground contour.
作为本发明的另一个优选实施例,如图2所示,所述S5步骤中建立三维KD树将采集到的无序三维点云数据整理形成有序三维点云数据包括以下步骤:As another preferred embodiment of the present invention, as shown in Figure 2, in the step S5, establishing a three-dimensional KD tree to organize the collected disordered three-dimensional point cloud data into ordered three-dimensional point cloud data includes the following steps:
S51. For the unordered three-dimensional point cloud data T = {p1, p2, p3} collected by the lidar, initialize the split axis: compute the variance of the point cloud position data along each dimension and take the dimension with the largest variance as the splitting hyperplane, marked r;
S52. Retrieve the current point cloud position data along the dimension of the splitting hyperplane, find the median, and place it on the current node; the root node corresponds to the hyperplane region of the three-dimensional space containing T;
S53. Split the current hyperplane region into two sub-regions: all values smaller than the median are assigned to the left-branch sub-region, and all values greater than or equal to the median to the right-branch sub-region;
S54. Update the splitting hyperplane r obtained in step S51 and store the points assigned to the sub-regions on the current root node; r is updated as r = (r + 1) % 3, where % denotes the remainder operation;
S55. Repeat step S52 on the data of the left-branch sub-region obtained in step S53 until the sub-region contains no data, thereby determining the left node; likewise repeat step S52 on the data of the right-branch sub-region until it contains no data, thereby determining the right node;
S56. Output the KD tree corresponding to the point cloud, and organize the collected unordered three-dimensional point cloud data into ordered three-dimensional point cloud data according to this KD tree.
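Steps S51–S56 can be sketched as the following recursive build, which picks the largest-variance axis for the first split and then cycles the axis with r = (r + 1) % 3; the nested-tuple node layout is an illustrative choice, not the patent's:

```python
def build_kdtree(points, r=None):
    """Recursive KD-tree build over 3-D points, following S51-S56.
    Returns nested (point, axis, left, right) tuples."""
    if not points:
        return None
    if r is None:  # S51: initial axis = dimension of largest variance
        variances = []
        for axis in range(3):
            vals = [p[axis] for p in points]
            mean = sum(vals) / len(vals)
            variances.append(sum((v - mean) ** 2 for v in vals))
        r = variances.index(max(variances))
    points = sorted(points, key=lambda p: p[r])
    mid = len(points) // 2            # S52: median along the split axis
    next_r = (r + 1) % 3              # S54: cycle the split dimension
    return (points[mid], r,
            build_kdtree(points[:mid], next_r),      # S53: < median
            build_kdtree(points[mid + 1:], next_r))  # S53: >= median
```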
Further preferably, as shown in Figure 3, the Euclidean distance used in step S5 to cluster the ordered three-dimensional point cloud data is computed, for points i, j ∈ {1, …, n}, as:

d(p_i, p_j) = sqrt( (x_i − x_j)² + (y_i − y_j)² )

where x_i is the abscissa of the i-th point in the ordered three-dimensional point cloud data, y_i is its ordinate, and n is the total number of points in the ordered three-dimensional point cloud data.
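A greedy Euclidean-clustering sketch built on this distance; a real implementation would query the KD tree for neighbours instead of the linear scan used here, and the tolerance value is an illustrative assumption:

```python
import math

def euclidean_cluster(points, tolerance):
    """Grow each cluster by pulling in every point within `tolerance`
    of any point already in the cluster; returns lists of indices."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    unvisited = list(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop(0)
        cluster, frontier = [seed], [seed]
        while frontier:
            q = frontier.pop()
            for idx in unvisited[:]:
                if dist(points[q], points[idx]) <= tolerance:
                    unvisited.remove(idx)
                    cluster.append(idx)
                    frontier.append(idx)
        clusters.append(cluster)
    return clusters
```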
As another preferred embodiment of the present invention, in step S5, converting the clustered ordered three-dimensional point cloud data into the coordinates of the spilled object within a two-dimensional plane through matrix transformation comprises the following steps:
1) Construct the transfer matrix M, which maps the three-dimensional coordinates (X, Y, Z) of the target spilled object in the point cloud space collected by the lidar to its coordinates (u, v) in the two-dimensional image plane:

Z_C · [u, v, 1]ᵀ = M · [X, Y, Z, 1]ᵀ

where m_ab is the transfer parameter in row a, column b of the transfer matrix M, with a = 1, 2, 3 and b = 1, 2, 3, 4, and Z_C is the z-axis coordinate in the camera coordinate system of the camera;

2) According to the transfer matrix, the coordinates (u, v) of the target spilled object in the two-dimensional plane are computed as:

u = (m_11·X + m_12·Y + m_13·Z + m_14) / Z_C
v = (m_21·X + m_22·Y + m_23·Z + m_24) / Z_C
with Z_C = m_31·X + m_32·Y + m_33·Z + m_34;
3) The calculation in step 2) yields a series of linear equations; solving them gives the calibration parameters and hence the coordinates (u, v) of the target spilled object in the two-dimensional plane.
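The projection of steps 1)–2) can be sketched as below; the numeric matrix values are hypothetical placeholders standing in for the calibrated m_ab parameters obtained in step 3):

```python
import numpy as np

# Hypothetical 3x4 transfer matrix M; real values come from the joint
# camera-lidar calibration described in step 3), not from the patent.
M = np.array([[800.0,   0.0, 320.0, 0.0],
              [  0.0, 800.0, 240.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

def project(point_xyz, M):
    """Z_C * [u, v, 1]^T = M * [X, Y, Z, 1]^T: multiply the homogeneous
    point, then divide by the third component (Z_C) to recover (u, v)."""
    X, Y, Z = point_xyz
    uvw = M @ np.array([X, Y, Z, 1.0])
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```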
Three-dimensional point cloud data are collected by the lidar; the three-dimensional coordinates (X, Y, Z) of the target spilled object in the point cloud space acquired by the lidar are read, and the corresponding coordinates (u, v) of the object in the two-dimensional image plane are obtained through the matrix transformation. When the computed image pixel coordinates are judged to lie inside the image read by the camera, the image pixel values (R, G, B) are read out and assigned to the point cloud data, forming a colored 3D point cloud in the 3D coordinate system of the RGB camera and thereby realizing the fusion of the camera and the lidar.
The lidar has by then completed ROI extraction, ground segmentation, Euclidean clustering and other operations on the point cloud data and generated a rectangular object bounding box, while the image processing has essentially determined the basic position of the spilled object; its three-dimensional position relative to the vehicle body can therefore be located, completing the positioning. Based on the topological relationship between the lane in which the spilled object lies and the vehicle body, clues can be provided for the vehicle's next driving route.
Although steps S51–S56 convert the collected lidar data into an ordered data set, the lidar views the obstacle points from a different angle at each acquisition, so the coordinates of some collected obstacle points vary considerably; many obstacle points are irrelevant to obstacle tracking, and an excess of obstacle points degrades the extraction of the bounding-box contour. The raw point cloud must therefore be screened for a region of interest. In the present invention, the ordered point cloud data collected by the lidar and organized by the KD tree — from which obstacles cannot yet be determined accurately — undergo further clustering through Euclidean clustering, so that the point cloud data regularly aggregate into point sets. Accordingly, ROI extraction retrieves the regions containing the road surface and intersections, and ground segmentation is performed on the data by building a top view based on a planar grid, leaving the point cloud data of the obstacles on the road surface. Principal component analysis is then used to find the three principal directions of each clustered point set: the centroid is found, the covariance is calculated to obtain the covariance matrix, and the Jacobi iteration method yields the eigenvalues and eigenvectors of the covariance matrix, the eigenvectors being the principal directions. The (X, Y, Z) coordinates of each point are projected onto the computed axes; the position is obtained by summing all points and taking the mean, the center point and half-lengths are derived, and the minimal rotated rectangle is generated, forming a bounding box aligned with the target's principal component directions.
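The principal-component bounding-box construction described above can be sketched as follows; np.linalg.eigh stands in for the Jacobi iteration named in the text, and the point set in the usage example is illustrative:

```python
import numpy as np

def pca_obb(points):
    """PCA oriented bounding box: centroid, covariance matrix,
    eigenvectors as the principal axes, then the extents of the
    points projected onto those axes."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # columns = principal axes
    proj = (pts - centroid) @ eigvecs        # project onto the axes
    half_lengths = (proj.max(axis=0) - proj.min(axis=0)) / 2.0
    center = centroid + eigvecs @ ((proj.max(axis=0) + proj.min(axis=0)) / 2.0)
    return center, eigvecs, half_lengths
```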
Further preferably, the three-dimensional coordinates (X, Y, Z) in the point cloud space in step 1) are obtained by dividing the collected three-dimensional data into voxels on a voxel grid: a three-dimensional voxel grid with a cell volume of 1 cubic centimeter is created, and within each voxel the centroid of all points contained in the voxel is used to approximately represent the other points in the voxel. The centroid is computed as:

x_central = (1/m) Σ_{k=1}^{m} x_k,  y_central = (1/m) Σ_{k=1}^{m} y_k,  z_central = (1/m) Σ_{k=1}^{m} z_k

where m is the total number of voxel points contained in a voxel of the voxelized three-dimensional coordinates of the detected target, and x_central, y_central and z_central are the x-, y- and z-axis coordinates of the centroid. All points within such a voxel are ultimately represented by this centroid, and in the three-dimensional coordinates (X, Y, Z) of the point cloud space, X = x_central, Y = y_central, Z = z_central.
Downsampling via the voxel grid method reduces the number of points while preserving the shape features of the point cloud — without destroying its geometric structure — and removes a certain degree of noise points and outliers. The point cloud data after voxel filtering, however, remain unordered, and the data volume is still large and difficult to process; the KD tree is therefore needed to cut much of the time consumption while keeping the search for associated points and the registration of the point cloud in a real-time state.
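The 1 cm³ voxel-grid downsampling described above can be sketched as follows: points are bucketed by floored voxel index and each bucket is replaced by its centroid, matching the x_central/y_central/z_central formula; the leaf-size parameter name is an illustrative choice:

```python
import math
from collections import defaultdict

def voxel_downsample(points, leaf=0.01):
    """Bucket points into cubic voxels of side `leaf` (0.01 m = 1 cm)
    and replace each bucket with the centroid of its points."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / leaf), math.floor(y / leaf), math.floor(z / leaf))
        buckets[key].append((x, y, z))
    out = []
    for pts in buckets.values():
        m = len(pts)
        out.append((sum(p[0] for p in pts) / m,
                    sum(p[1] for p in pts) / m,
                    sum(p[2] for p in pts) / m))
    return out
```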
It should be noted that the serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments. The terms "comprise", "include" or any other variant thereof herein are intended to cover a non-exclusive inclusion, so that a process, apparatus, article or method comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article or method. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, apparatus, article or method comprising that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention — in essence, or the part that contributes to the prior art — may be embodied in the form of a software product stored on a storage medium as described above (such as ROM/RAM, a magnetic disk or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, computer, server, network device, etc.) to execute the methods described in the various embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211540766.XA CN115760898A (en) | 2022-12-02 | 2022-12-02 | World coordinate positioning method for road sprinklers in mixed Gaussian domain |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115760898A true CN115760898A (en) | 2023-03-07 |
Family
ID=85342839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211540766.XA Pending CN115760898A (en) | 2022-12-02 | 2022-12-02 | World coordinate positioning method for road sprinklers in mixed Gaussian domain |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115760898A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117522824A * | 2023-11-16 | 2024-02-06 | 安徽大学 | A multi-source domain generalized cloud and cloud shadow detection method based on domain knowledge base |
CN117522824B * | 2023-11-16 | 2024-05-14 | 安徽大学 | A multi-source domain generalization method for cloud and cloud shadow detection based on domain knowledge base |
CN117975407A * | 2024-01-09 | 2024-05-03 | 湖北鄂东长江公路大桥有限公司 | Road casting object detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |