CN101975951B - Field environment barrier detection method fusing distance and image information - Google Patents
- Publication number
- CN101975951B
- Authority
- CN
- China
- Prior art keywords
- zone
- laser radar
- coordinate system
- point
- axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Traffic Control Systems (AREA)
Abstract
The invention belongs to the technical field of obstacle detection, and particularly relates to a field environment obstacle detection method fusing distance and image information. Its aim is to provide a method for detecting the obstacles commonly encountered by an unmanned vehicle driving off-road, so that a driving route can be planned for the vehicle and the autonomous off-road driving capability of the unmanned vehicle is enhanced. The method comprises the following steps: establishing a mathematical model; detecting obstacles in the laser radar distance data; processing the camera image; and fusing the results. With this method, the obstacles commonly encountered in off-road driving, such as grass, roads, trees and bushes, can be detected and recognized, and the color of the drivable region is modeled so that abnormal parts of the drivable region can be further detected. The environment around the vehicle can be partitioned into a non-drivable region, a drivable region, an unknown region and so on, which facilitates planning the vehicle's driving route and enhances the autonomous off-road driving capability of the unmanned vehicle.
Description
Technical field
The invention belongs to the technical field of obstacle detection, and specifically relates to a field environment obstacle detection method fusing distance and image information.
Background art
Obstacle detection for an unmanned vehicle on its driving route is an important component of research on environment perception technology. The sensors commonly used in obstacle detection applications include laser radar, cameras, millimeter-wave radar and ultrasonic sensors. Laser radar measures distance from the time difference between the emitted light and the light reflected from an object's surface. It obtains range data directly and thus provides the vehicle with a convenient, intuitive description of the environment. Laser radar is applied in many concrete forms, for example object localization and tracking, environment modeling and obstacle avoidance, simultaneous localization and mapping (SLAM), and classification of terrain and morphological features; the echo intensity information of the laser radar can also be used for obstacle detection and tracking. A camera acquires rich physical information about the surrounding environment and is comparatively covert, so it is widely used; there is much prior work on both single-camera and multi-camera applications. However, camera images are easily affected by illumination, smoke and the environment, and although the stereo vision formed by multiple cameras can yield actual distances, the matching between images is complex and time-consuming, which limits the practical effect. Millimeter-wave radar is small and light, can measure distance, and penetrates fog, smoke and dust well, but its resolution and precision are low, so it is mainly used in vehicle anti-collision systems and in distance-keeping vehicle-following systems that control the braking system to mitigate rear-end collisions. An ultrasonic sensor measures range by detecting the time for an ultrasonic wave emitted from the transmitter to be reflected back to the receiver by an obstacle. Ultrasonic sensors are small and cheap and are in fairly general use, but owing to their low precision they can only be used where the accuracy requirements on environment perception are not high.
Laser radar scanning data and optical images are strongly complementary in their description of the environment: laser scanning data rapidly and accurately provide dense three-dimensional coordinates of object surfaces, while optical images contain rich color. Fusing laser scanning data with optical images therefore yields more comprehensive information about the unmanned vehicle's driving environment and improves both the speed of obstacle detection and the adaptability to complex environments. Many scholars have studied the fusion of distance and image information. Peter fused monocular camera images with sonar ranging data in an indoor environment and built an indoor environment model, but the approach is unsuitable for complex outdoor field environments. For the U.S. Demo III program, Tsai-Hong Hong of the University of Maryland tested the requirements for unmanned vehicles driving in field environments and proposed an obstacle detection algorithm fusing three-dimensional laser radar data and camera images, but only three typical road conditions (road signs, ponds and roads) were identified, and recognition of other common obstacles in the environment, such as grass and trees, was not studied. Monteiro used fused laser radar and camera information to detect pedestrians and vehicles moving on the road, without attempting to understand the surrounding environment. Xiang Zhiyu proposed an obstacle detection method for tall grass that fuses laser radar and camera information: the laser radar data are first grouped to determine obstacle points, which are then mapped into the camera image, and the image is segmented into non-drivable and drivable regions. This reduces the false judgments that occur with pure laser radar discrimination and improves the completeness of the final obstacle contours, but when the color of an obstacle is very close to that of the grass the fusion detection degrades, and the method does not derive the drivable region from the non-drivable region it identifies, so it lacks a more concrete understanding of the environment. In summary, the prior art lacks an obstacle detection method that can detect the obstacles commonly encountered by an unmanned vehicle driving off-road, so as to facilitate planning of the vehicle's driving route and improve the autonomous off-road driving capability of the unmanned vehicle.
Summary of the invention
The purpose of the invention is to provide a field environment obstacle detection method fusing distance and image information that can detect the obstacles commonly encountered by an unmanned vehicle driving off-road, thereby facilitating the planning of the vehicle's driving route and improving the autonomous off-road driving capability of the unmanned vehicle.
The present invention is achieved as follows:
A field environment obstacle detection method fusing distance and image information comprises the following steps:
Step 1: establish a mathematical model;
Step 2: laser radar range data obstacle detection;
Step 3: camera image processing;
Step 4: result fusion.
In the aforesaid step of establishing the mathematical model, coordinate systems are established for the vehicle body, the laser radar, the camera and the inertial navigation unit, and these coordinate systems are related to one another, specifically comprising the following steps:
(1) Establish the body coordinate system O_bX_bY_bZ_b;
The origin O_b is located at a fixed point of the vehicle body, namely the projection of the laser radar light source onto the ground plane; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead, and the Z_b axis points upward; the laser radar light source sits at height h_0 above the body coordinate system origin.
(2) Establish the laser radar coordinate system O_lX_lY_lZ_l;
The origin O_l is defined at the center of the laser radar light source; the X_l axis points straight up, the Y_l axis points horizontally to the right, and the Z_l axis points toward the front of the vehicle;
(3) Establish the camera coordinate system O_cX_cY_cZ_c;
The origin O_c is located at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead;
(4) Establish the inertial navigation coordinate system O_IX_IY_IZ_I;
The inertial navigation coordinate system has the same orientation as the body coordinate system;
(5) Determine the transformation relations between the coordinate systems, specifically comprising the following steps:
(a) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the body coordinate system O_bX_bY_bZ_b;
First rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis direction;
The parameters in the corresponding transformation formula have the same meanings as above;
(b) Transform the camera coordinate system to the image coordinate system;
According to the standard camera pinhole model, the three-dimensional coordinates of a point under the camera coordinate system are converted to a pixel location in the image, where A is the camera intrinsic matrix, A = [α_x, γ, u_0; 0, α_y, v_0; 0, 0, 1]; α_x and α_y are the scale factors of the u axis and v axis of the image coordinate system, (u_0, v_0) is the optical center, and γ is the skew factor between the u axis and v axis; in many cases γ = 0. The other parameters have the same meanings as above;
(c) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the camera coordinate system O_cX_cY_cZ_c:
X_c = R·X_l + t
where R and t are the rotation matrix and translation vector from the laser radar coordinate system to the camera coordinate system; the other parameters have the same meanings as above;
(d) Correct the range data;
Suppose the vehicle-body pitch angle output by the inertial navigation equipment is α and the roll angle is γ; the corrected range data are obtained from the corresponding rotation formula, in which the parameters have the same meanings as above.
The aforesaid laser radar range data obstacle detection step comprises the following steps:
(1) Range data preprocessing;
Filter the laser radar range data;
(2) Region segmentation;
Segment the three-dimensional point cloud obtained by the laser radar scan of the environment into a number of regions;
(3) Region identification;
Analyze and identify grass, roads, trees and bushes.
The aforesaid region segmentation step comprises the following steps:
(a) Retrieve the k points nearest to a given three-dimensional point p_i in its neighborhood, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of the scan point;
(b) The coordinates of the three-dimensional point p_i and its normal n_i form a feature vector; the Cartesian spatial distance between two three-dimensional points is ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j and its normal n_j likewise form a feature vector; the angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5·⟨n_i, n_j⟩ / (‖n_i‖·‖n_j‖); different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when either the spatial distance or the angular distance between two points exceeds the respective threshold, the two points are considered not to belong to the same region;
(c) Compute the spatial distance ρ_e and angular distance ρ_a between all pairs of neighboring points; if ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, merge the two points into one cluster;
(d) If a point cannot be clustered with any other, create a new region;
(e) After all points have been clustered, check the number of points in each class; if the number is less than a threshold n_cmin and the average height of the points in the class is much greater than the height h_max of obstacles the vehicle can surmount, the class is noise and is deleted.
In the aforesaid region segmentation step, the value of k ranges from 10 to 30.
The aforesaid region identification step specifically comprises the following steps:
(a) Compute region features;
Compute the average height, the singular values and the region surface normal vector. The computation is as follows:
Singular values [σ_1 σ_2 σ_3]: form the coordinates of all points in the region into a matrix and perform the singular value decomposition UΣV^T; the elements of Σ are the singular values, sorted in descending order σ_1, σ_2, σ_3.
Region surface normal vector γ: the unit vector in V^T corresponding to the smallest singular value σ_3.
(b) Identification, comprising the following steps:
1. Grass identification;
A region is identified as grass when the following conditions are satisfied:
1) the average height of the points in the region is lower than the vehicle's obstacle-surmounting capability h_max;
2) the penetrance of the points in the region is high, and the singular values are distributed as σ_1 ≈ σ_2 ≈ σ_3;
3) the region surface normal vector γ points upward.
After grass is identified, these points are merged into the same region, and the region is marked as a drivable region.
2. Road identification;
A region is identified as road when the following conditions are satisfied:
2) the penetrance of the points in the region is low, with σ_1 > σ_2 >> σ_3;
3) the region surface normal vector γ points upward.
3. Trunk identification;
When the singular values are distributed as σ_1 >> σ_2 ≈ σ_3 and the region surface normal vector γ lies in the horizontal direction, the region is identified as a tree trunk.
4. Bush identification;
When the singular values satisfy σ_1 > σ_2 > σ_3 and the region surface normal vector γ also lies in the horizontal direction, the region is identified as bushes.
In the aforesaid identification step, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies in [1, 4], σ_1 ≈ σ_2; when the ratio σ_1/σ_2 is greater than 50, σ_1 >> σ_2.
The aforesaid camera image processing step specifically comprises the following steps:
(1) Transform the laser radar data of the segmented regions into the image coordinate system of the camera;
(2) Further identify the preliminarily judged drivable region;
This specifically comprises the following steps:
(a) Establish a Gaussian mixture model;
Suppose the sequence of pixels in the image is {x_1, x_2, …, x_i} and the value of the i-th pixel is x_i = [R_i, G_i, B_i]; the probability of this pixel is
P(x_i) = Σ_{k=1..K} ω_k,i·η(x_i, μ_k,i, Σ_k,i)
where K is the number of Gaussian distributions, chosen as 3 to 5; ω_k,i is the estimated weight; μ_k,i is the mean of the k-th Gaussian distribution; and Σ_k,i is the covariance matrix of the k-th Gaussian distribution, assumed to follow the rule Σ_k,i = σ_k²·I. The probability density of pixel i in the k-th Gaussian distribution function is computed by
η(x_i, μ_i, Σ) = (2π)^(−3/2)·|Σ|^(−1/2)·exp(−(x_i − μ_i)^T·Σ^(−1)·(x_i − μ_i)/2)
where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the Gaussian distribution;
(b) Region identification;
Compare the image of the current region to be determined against the established Gaussian model of drivable-region color by absolute value; if |x_i − f| ≥ T, where T is a threshold, the point is a non-drivable region point.
The aforesaid result fusion step specifically comprises the following steps:
Fuse the discrimination results of the laser radar and the camera, judge the drivable region and the non-drivable region, and construct an environment map; combining the discrimination results of the laser radar and the camera, the grid map is divided into drivable and non-drivable regions according to the following principles:
(1) if the laser radar discrimination result is a non-drivable region, the grid map cell is marked non-drivable;
(2) if the laser radar discrimination result is a drivable region, the camera discrimination result is analyzed:
(a) if the camera discrimination result is a drivable region, the cell is drivable;
(b) if the camera discrimination result is a non-drivable region, the cell is non-drivable;
(3) if a region is detected by neither the laser radar nor the camera, that region is an unknown region.
The beneficial effects of the invention are:
The present invention can detect and identify the obstacles commonly encountered by an unmanned vehicle driving off-road, such as grass, roads, trees and bushes, and models the color of the drivable region so that abnormal parts of the drivable region can be further detected. The environment around the vehicle can be partitioned into a 'non-drivable region', a 'drivable region', an 'unknown region' and so on, which facilitates planning the vehicle's driving route and improves the autonomous off-road driving capability of the unmanned vehicle.
Description of drawings
Fig. 1 is the flow chart of the field environment obstacle detection method fusing distance and image information of the present invention;
Fig. 2 is a schematic diagram of the relative positions of the body coordinate system, the laser radar coordinate system and the camera coordinate system of the present invention;
Fig. 3 is a schematic diagram of the coordinate transformation from the laser radar coordinate system to the camera coordinate system of the present invention.
Embodiment
The field environment obstacle detection method fusing distance and image information of the present invention is introduced below with reference to the drawings and embodiments:
As shown in Fig. 1, the field environment obstacle detection method fusing distance and image information comprises the following steps:
Step 1: establish a mathematical model;
When establishing the vehicle sensor model, coordinate systems must be established for the vehicle body, the laser radar, the camera and the inertial navigation unit, and these coordinate systems must be related to one another, specifically comprising the following steps:
(1) Establish the body coordinate system O_bX_bY_bZ_b;
A body coordinate system fixed to the vehicle body is established; it moves with the vehicle. The body coordinate system is defined as follows: the origin O_b is located at a fixed point of the vehicle body, specified as the projection of the laser radar light source onto the ground plane; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead, and the Z_b axis points upward. The laser radar light source sits at height h_0 above the body coordinate system origin.
(2) Establish the laser radar coordinate system O_lX_lY_lZ_l;
The laser radar is mounted on a pan-tilt unit; the laser it emits can only scan two-dimensional data, and three-dimensional scanning is achieved by swinging the pan-tilt unit. The laser radar coordinate system is therefore not attached to the laser radar alone; the laser radar and the pan-tilt unit are considered together as a whole. The origin O_l is defined at the center of the laser radar light source, the X_l axis points straight up, the Y_l axis points horizontally to the right, and the Z_l axis points toward the front of the vehicle.
(3) Establish the camera coordinate system O_cX_cY_cZ_c;
A camera coordinate system fixed to the camera is established. The origin O_c is located at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead.
(4) Establish the inertial navigation coordinate system O_IX_IY_IZ_I;
The inertial navigation equipment is fixedly mounted on the vehicle body, and an inertial navigation coordinate system with the same orientation as the body coordinate system is established. The installation error of the inertial navigation equipment can be resolved by a driving experiment at installation time; once the installation error has been corrected, the attitude angles output by the inertial navigation equipment are exactly the attitude angles of the vehicle body.
When laser radar data are used for obstacle identification, the pitch angle and roll angle of the vehicle body sensed by the inertial navigation unit are needed; by constructing a rotation matrix, the range data are corrected into a coordinate system whose origin lies on the horizontal plane.
The relative positions of the body coordinate system, the laser radar coordinate system and the camera coordinate system are shown in Fig. 2.
(5) Determine the transformation relations between the coordinate systems, specifically comprising the following steps:
(a) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the body coordinate system O_bX_bY_bZ_b;
First rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis direction. This yields the corresponding transformation formula.
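As an illustration of the axis conventions defined above, the following minimal Python sketch maps laser radar coordinates into body coordinates; the matrix below is the net effect of the two 90° rotations described in the text, and the function names and sign conventions are assumptions rather than the patent's own notation:

```python
import numpy as np

# Axis conventions from the text:
#   lidar frame:  X_l up, Y_l right, Z_l forward (origin at the light source)
#   body frame:   X_b right, Y_b forward, Z_b up (origin on the ground,
#                 h_0 directly below the light source)
R_LB = np.array([[0.0, 1.0, 0.0],   # X_b takes the value of Y_l
                 [0.0, 0.0, 1.0],   # Y_b takes the value of Z_l
                 [1.0, 0.0, 0.0]])  # Z_b takes the value of X_l

def lidar_to_body(p_l, h0):
    """Map a point from laser radar coordinates to body coordinates.
    R_LB is the net effect of the two 90-degree rotations described above;
    the h0 offset accounts for the height of the light source."""
    return R_LB @ p_l + np.array([0.0, 0.0, h0])

# The lidar origin itself lands h0 above the body origin:
assert np.allclose(lidar_to_body(np.zeros(3), 1.5), [0.0, 0.0, 1.5])
```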
(b) Transform the camera coordinate system to the image coordinate system;
According to the standard camera pinhole model, the three-dimensional coordinates of a point under the camera coordinate system are converted to a pixel location in the image, where A is the camera intrinsic matrix,
A = [α_x, γ, u_0; 0, α_y, v_0; 0, 0, 1];
α_x and α_y are the scale factors of the u axis and v axis of the image coordinate system, (u_0, v_0) is the optical center, and γ is the skew factor between the u axis and v axis; in many cases γ = 0. A can be obtained by the planar-target calibration method proposed by Zhang Zhengyou et al.
The camera calibration method based on a planar target proposed by Zhang Zhengyou et al. requires the camera to photograph the same planar target from more than two different orientations; the camera and the planar target can move freely, and the motion parameters need not be known. During calibration the camera's internal parameters are assumed constant; only the external parameters change when the target is photographed from different angles. The camera photographs images of a checkerboard target at different positions, the corners of the squares on the checkerboard target are extracted as feature points, and the relations between the target points and the corresponding image points are established, from which the camera intrinsic parameters and the external parameters of the target with respect to the camera coordinate system are computed.
Concretely, using a planar checkerboard as the target, the camera photographs the target at different positions, the corners of the squares on the checkerboard target are extracted as the feature points of the target plane, the relations between the feature points of the target plane and the corresponding points on the image plane are established, and the camera intrinsic matrix A and the external parameters R_c, t_c of the target with respect to the camera coordinate system are computed. The calibration process is as follows:
First, solve for the homography matrix H:
Let a feature point on the target plane be M̃ and the corresponding point on the image plane be m̃; the mapping between the two is shown in formula (4):
s·m̃ = A·[r_c1 r_c2 t_c]·M̃    (4)
In formula (4), s is a scale factor; r_c1 and r_c2 are the first two columns of the rotation matrix r_c that transforms the feature points of the target plane from the target plane coordinate system to the image coordinate system, and t_c is the corresponding translation vector; A is the camera intrinsic matrix, A = [α_x, γ, u_0; 0, α_y, v_0; 0, 0, 1], in which α_x and α_y are the scale factors of the u axis and v axis of the image coordinate system, (u_0, v_0) is the optical center, and γ is the skew factor between the u axis and v axis; in many cases γ = 0.
Let H = A·[r_c1 r_c2 t_c]; then formula (4) can be abbreviated as formula (5):
s·m̃ = H·M̃    (5)
Let h̄_1^T, h̄_2^T, h̄_3^T denote the transposes of the three row vectors of the H matrix; then the mapping between a point on the image plane and the corresponding feature point on the target plane can be written in the linear form of formula (6):
[M̃^T, 0^T, −u·M̃^T; 0^T, M̃^T, −v·M̃^T]·[h̄_1; h̄_2; h̄_3] = 0    (6)
From the n points on each captured image, the matrix H_i corresponding to each image, i = 1, 2, …, can be solved by the least squares method.
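A minimal numpy sketch of this least-squares step (the direct linear transformation reading of formula (6); the function name and the H[2, 2] normalization are assumptions):

```python
import numpy as np

def estimate_homography(target_pts, image_pts):
    """Solve for H by least squares: stack the two DLT rows of formula (6)
    for every target-plane/image point pair and take the null-space
    direction via SVD."""
    rows = []
    for (X, Y), (u, v) in zip(target_pts, image_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    L = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(L)
    H = vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value
    return H / H[2, 2]
```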
Next, from the H matrices of all the captured images, solve for the intermediate vector b:
Let B = A^(−T)·A^(−1); then B is symmetric and can be written in the vector form shown in formula (8):
b = [B_11 B_12 B_22 B_13 B_23 B_33]^T    (8)
Then the relation shown in formula (9) holds:
h_i^T·B·h_j = v_ij^T·b    (9)
where h_i = [h_i1 h_i2 h_i3]^T and h_j = [h_j1 h_j2 h_j3]^T are columns of the H matrix of an image, i = 1, 2, …, j = 1, 2, …, and
v_ij = [h_i1·h_j1, h_i1·h_j2 + h_i2·h_j1, h_i2·h_j2, h_i3·h_j1 + h_i1·h_j3, h_i3·h_j2 + h_i2·h_j3, h_i3·h_j3]^T    (10)
Stacking all the v_ij forms the matrix V, a 2n × 6 matrix, where n is the number of images; then the relation shown in formula (11) holds:
V·b = 0    (11)
b can be solved from formula (11).
Finally, the camera intrinsic matrix A can be decomposed from b, and R_c, t_c can be solved using A^(−1):
Decomposing the camera intrinsic parameters from b is done as follows: assemble b into the matrix B, use the Cholesky matrix decomposition algorithm to solve for A^(−1), and invert to obtain A.
The external parameters calibrated for each image are:
r_c1 = λ·A^(−1)·h_1    (12)
r_c2 = λ·A^(−1)·h_2    (13)
r_c3 = r_c1 × r_c2    (14)
t_c = λ·A^(−1)·h_3    (15)
where λ = 1/‖A^(−1)·h_1‖ = 1/‖A^(−1)·h_2‖, so the rotation matrix in the calibrated external parameters is:
R_c = [r_c1 r_c2 r_c3]    (16)
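A short sketch of formulas (12)-(16), assuming A and a homography H are already known (names are illustrative; a production version would also re-orthogonalize R_c, e.g. via SVD):

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover the target-to-camera rotation R_c and translation t_c from
    the intrinsic matrix A and one image's homography H, per (12)-(16)."""
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(A_inv @ h1)
    r1 = lam * A_inv @ h1                 # formula (12)
    r2 = lam * A_inv @ h2                 # formula (13)
    r3 = np.cross(r1, r2)                 # formula (14)
    t_c = lam * A_inv @ h3                # formula (15)
    R_c = np.column_stack([r1, r2, r3])   # formula (16)
    return R_c, t_c
```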
(c) Transform from the laser radar coordinate system O_lX_lY_lZ_l to the camera coordinate system O_cX_cY_cZ_c:
X_c = R·X_l + t    (17)
where R and t are the rotation matrix and translation vector from the laser radar coordinate system to the camera coordinate system. R and t are computed as follows:
Using a planar checkerboard target with alternating black and white squares, the target is placed at different positions in front of the camera and the laser radar; the camera and the laser radar are started simultaneously, and visible images and range data are collected and recorded, yielding a series of 'image-distance' pairs. This target serves both for the calibration of the camera intrinsic matrix A and for the calibration of the camera-laser radar external parameters R and t.
As shown in Fig. 3, taking the i-th target plane as an example, the distance from the camera coordinate system origin O_c to the i-th target plane is λ_c,i, with normal direction γ_c,i; the distance from the three-dimensional laser radar coordinate origin O_l to the i-th target plane is λ_l,i, with normal direction γ_l,i. The computation is as follows:
γ_c,i = r_3,c,i,  λ_c,i = r_3,c,i^T·t_c,i
where r_3,c,i denotes the third column of R_c,i. The normal directions and distances from the camera coordinate system origin to all the target planes can be written in the following form:
Γ_c = [γ_c,1 γ_c,2 … γ_c,n]    (20)
Λ_c = [λ_c,1 λ_c,2 … λ_c,n]^T    (21)
where Γ_c is a 3 × n matrix, Λ_c is an n × 1 column vector, and n is the number of 'distance-image' pairs.
The target plane is fitted by the least squares method from the target point data scanned by the laser radar, giving the normal direction γ_l,i and the distance λ_l,i from the laser radar coordinate origin to the i-th target plane. Let x_l,i be the m_i × 3 point set of the i-th target plane, where m_i is the number of target plane points and x̄_l,i is the mean of the point set; performing the singular value decomposition U_l,i·Σ_l,i·V_l,i^T of the centered point set, γ_l,i and λ_l,i are
γ_l,i = U_3,l,i    (24)
λ_l,i = γ_l,i^T·x̄_l,i    (25)
where U_3,l,i denotes the third column of U_l,i. The normal directions and distances from the laser radar coordinate origin to all the target planes can be written in the following form:
Γ_l = [γ_l,1 γ_l,2 … γ_l,n]    (26)
Λ_l = [λ_l,1 λ_l,2 … λ_l,n]^T    (27)
where Γ_l is a 3 × n matrix, Λ_l is an n × 1 column vector, and n is the number of 'distance-image' pairs.
The rotation between the two sensors aligns these normal directions: with the singular value decomposition UΣV^T of Γ_l·Γ_c^T,
R = V·U^T    (28)
and the translation vector t is then obtained from the plane distances Λ_c and Λ_l by least squares.
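A minimal sketch of this orthogonal-Procrustes reading of formula (28); the exact U/V convention in the patent may differ, and the reflection guard is an assumption:

```python
import numpy as np

def align_normals(gamma_l, gamma_c):
    """Solve the rotation that best aligns the lidar-frame plane normals
    (columns of gamma_l, 3 x n) with the camera-frame normals (gamma_c)."""
    U, _, Vt = np.linalg.svd(gamma_l @ gamma_c.T)
    R = Vt.T @ U.T                      # R = V U^T, cf. formula (28)
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R
```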
(d) Correct the range data;
Suppose the vehicle-body pitch angle output by the inertial navigation equipment is α and the roll angle is γ; the corrected range data are then obtained from the corresponding rotation formula.
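The patent's exact rotation formula is not reproduced above, so the following sketch is only one plausible reading; in particular the X-pitch / Y-roll axis order is an assumption:

```python
import numpy as np

def correct_range_points(points, pitch, roll):
    """Rotate body-frame points (n x 3 array) by the inertial-navigation
    pitch and roll so the data sit in a frame whose origin lies on the
    horizontal plane, as described in step (4) above."""
    ca, sa = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # pitch about X_b
    Ry = np.array([[cg, 0, sg], [0, 1, 0], [-sg, 0, cg]])   # roll about Y_b
    return points @ (Ry @ Rx).T
```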
Step 2: laser radar range data obstacle detection, comprising the following steps:
(1) Range data preprocessing;
Because of factors such as laser radar drift, object surface characteristics and vehicle motion, a certain amount of noise is present in the ranging data, so the raw range data should first be preprocessed. The measurement data contain noise points and erroneous measurement points caused by disturbances; filtering these points out improves the computation speed and the accuracy of feature detection. In addition, because of the mixed-pixel phenomenon of the laser, the raw data contain some isolated data points whose spacing from the preceding and following data points is very large; these are meaningless for environment understanding. Therefore, these scattered data points and isolated points can be removed by median filtering with a 3 × 3 template.
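A one-line sketch of this preprocessing step, assuming the scan is organized as a 2-D range image (scipy's median_filter stands in for whatever filter implementation is actually used):

```python
from scipy.ndimage import median_filter

def preprocess_range_image(range_image):
    """Suppress isolated outliers (mixed pixels, disturbance noise) in a
    lidar range image with a 3 x 3 median filter, as described above."""
    return median_filter(range_image, size=3)
```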
(2) Region segmentation;
The three-dimensional point cloud obtained by the laser radar scan of the environment is segmented into a number of regions, and these regions are then identified individually.
Several methods for segmenting point cloud data exist in the prior art. One places the three-dimensional coordinates and color of each point together in a six-dimensional vector, defines a merged distance criterion over three-dimensional coordinates and color, and uses a region-growing method to split out the planes of similar color in the scene. Another fits planes to object surfaces and grows the plane seeds region by region, segmenting the point cloud. A third is the radially bounded nearest neighbor (RBNN) clustering method, which segments point clouds well; its steps are as follows:
(a) Retrieve the k points nearest to a given three-dimensional point p_i in its neighborhood, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of the scan point. In the present embodiment, k ranges from 10 to 30, for example 10, 20 or 30.
The concrete computation is as follows: perform the singular value decomposition UΣV^T of the points in Q_i, and take the unit vector in V^T corresponding to the smallest value in Σ as the normal n_i of p_i. This singular value decomposition is prior art.
(b) The Cartesian spatial distance between two three-dimensional points is defined as ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j and its normal n_j form a feature vector just as p_i and its normal n_i do. The angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5·⟨n_i, n_j⟩ / (‖n_i‖·‖n_j‖). Different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when either the spatial distance or the angular distance between two points exceeds the respective threshold, the two points are considered not to belong to the same region.
(c) Compute the spatial distance ρ_e and angular distance ρ_a between all pairs of neighboring points. If ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, the two points can be merged into one cluster. All neighboring points are merged into a region as far as possible.
(d) If a point cannot be clustered with any other, a new region is created.
(e) After all points have been clustered, check the number of points in each class; if the number is less than a threshold n_cmin and the average height of the points in the class is much greater than the height h_max of obstacles the vehicle can surmount, the class is considered noise and can be deleted. The threshold n_cmin and the obstacle-surmounting height h_max are chosen according to the actual situation.
(3) Region identification;
An unmanned vehicle driving in a field environment encounters varied road conditions such as grass, roads, trees and bushes, and these terrains must be analyzed and identified, specifically comprising the following steps:
(a) Compute region features;
Before identification, several features of each region must be computed, including the average height, the singular values and the region surface normal vector. The computation is as follows:
Singular values [σ_1 σ_2 σ_3]: form the coordinates of all points in the region into a matrix and perform the singular value decomposition UΣV^T; the elements of Σ are the singular values, sorted in descending order σ_1, σ_2, σ_3.
Region surface normal vector γ: the unit vector in V^T corresponding to the smallest singular value σ_3.
(b) Identification, comprising the following steps:
1. Grass identification;
When the laser strikes grass, the returned data points are very disordered because the blades of grass lie every which way, and ordinary edge detection methods cannot detect grass accurately. The biggest difference between grass and an ordinary road is the density of the scan points, that is, the penetrance. Grass with sparse stems and leaves offers the laser many gaps to pass through, so its penetrance is large; dense bushes with profuse shoots and leaves have a smaller penetrance; and if the laser scans a hard surface, the distance values of the scan points are evenly distributed and the penetrance is approximately zero. When an unmanned vehicle drives in a field environment, grass covering the ground can be driven over; therefore, when grass with large penetrance is detected, it should be marked as a drivable region.
A region is identified as grass when the following conditions are satisfied:
1) the average height of the points in the region is lower than the vehicle's obstacle-surmounting capability h_max;
2) the penetrance of the points in the region is high, and the singular values are distributed as σ_1 ≈ σ_2 ≈ σ_3;
3) the region surface normal vector γ points upward.
Because points on the grass may be assigned to different regions during region segmentation, after grass has been identified these points need to be merged into the same region, and the region is marked as a drivable region. Merging here means placing the points of several regions into one region.
2. Road identification;
The biggest difference between a road region and a grass region is the penetration capability of the laser beam. The surface of a road is relatively smooth, so when the beam strikes the road surface the scan points are evenly distributed. A road is a drivable region.
A region is identified as road when the following conditions are satisfied:
2) the penetrance of the points in the region is low, with σ_1 > σ_2 >> σ_3;
3) the region surface normal vector γ points upward.
3. Trunk identification;
The scan points on a tree trunk are generally distributed over part of a cylindrical surface. When the singular values are distributed as σ_1 >> σ_2 ≈ σ_3 and the region surface normal vector γ lies in the horizontal direction, the region is identified as a tree trunk.
4. Bush identification;
The scan points on bushes are all distributed in one plane; when the singular values satisfy σ_1 > σ_2 > σ_3 and the region surface normal vector γ also lies in the horizontal direction, the region is identified as bushes.
In the present invention, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies in [1, 4], σ_1 ≈ σ_2 is considered to hold; when the ratio σ_1/σ_2 is greater than 50, σ_1 >> σ_2 is considered to hold.
Step 3: camera image processing, specifically comprising the following steps:
(1) Transform the laser radar data of the segmented regions into the image coordinate system of the camera;
The transformation follows the coordinate transformation relations in Step 1.
(2) Further identify the preliminarily judged drivable region;
In a field environment, besides drivable road and grass there may be puddles, stones and the like. When the laser radar scans a puddle, part of the laser light is not reflected, leaving holes in the laser radar scan data; a part of the laser light may also be reflected back, and the heights of those points all lie on the horizontal plane, so obstacle identification from laser radar data alone easily treats a puddle as a drivable region. In fact, present technology cannot assess the depth or softness of a puddle, so once an unmanned vehicle drives in, danger may result. The driving environment may also contain scattered stones which, though small in volume, jolt the unmanned vehicle and reduce its speed. With the laser radar's range data alone, the comparatively low resolution of the laser radar cannot depict the shape of a stone accurately, so the camera image is used at this point to judge these smaller obstacles.
To separate the drivable region from the non-drivable region, Gaussian mixture models (GMM) are used to classify color. The color of the drivable region obeys a certain statistical law, and the color value of each pixel can be represented by a mixture model of several Gaussian distributions. Whenever an image of a new drivable region is obtained, the Gaussian mixture model is updated; if a pixel of the current image matches the model, the point is judged to belong to the drivable region, otherwise it is judged to be a point of some other character.
This specifically comprises the following steps:
(a) Establish the Gaussian mixture model;
Suppose the sequence of pixels in the image is {x_1, x_2, …, x_i} and the value of the i-th pixel is x_i = [R_i, G_i, B_i]; the probability of this pixel is
P(x_i) = Σ_{k=1..K} ω_k,i·η(x_i, μ_k,i, Σ_k,i)
where K is the number of Gaussian distributions, generally chosen as 3 to 5; ω_k,i is the estimated weight; μ_k,i is the mean of the k-th Gaussian distribution; and Σ_k,i is the covariance matrix of the k-th Gaussian distribution, generally assumed to follow the rule Σ_k,i = σ_k²·I. The probability density of pixel i in the k-th Gaussian distribution function can be computed by
η(x_i, μ_i, Σ) = (2π)^(−3/2)·|Σ|^(−1/2)·exp(−(x_i − μ_i)^T·Σ^(−1)·(x_i − μ_i)/2)
where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the Gaussian distribution.
The process of establishing the Gaussian mixture model of drivable-region color features consists of the following steps:
1. Convert the color space of the image region to be processed.
2. Initialize the parameters, including the overall drivable-region threshold T, the learning rate α, the number of Gaussian models K, and the initial weights ω_k,i.
3. Read the color values of the initially designated drivable road and grass images as the means of the Gaussian mixture model, with the variances set to predetermined empirical values, and establish the Gaussian mixture model.
4. Read the image data of the initially designated drivable region and compare the color value of each pixel with the existing Gaussian models, judging whether the pixel matches a model, i.e. whether
|x_i − μ_k| < 2.5·σ_k
holds, where x_i is the color value of the pixel and σ_k² is the variance of the k-th Gaussian distribution model.
If matched: update the parameters and weight of the k-th Gaussian model; the parameters include the expectation, the variance and the learning factor:
μ_i = (1 − ρ)·μ_{i−1} + ρ·x_i    (34)
σ_i² = (1 − ρ)·σ_{i−1}² + ρ·(x_i − μ_i)^T·(x_i − μ_i)    (35)
ρ = α·η(x_i | μ_k, σ_k)    (36)
where μ_k and σ_k are the current mean vector and variance of the k-th Gaussian distribution model, α is the learning rate, and ρ is the learning factor of model adaptation, whose effect is similar to that of α.
If not matched: if k < K, add a Gaussian model, with the new Gaussian distribution taking the color value of the new pixel as its mean and empirical values for the variance and weight; if k = K, replace the model with the smallest weight by a new Gaussian distribution model, with mean and variance as above.
The weight update formula is
ω_k,i = (1 − α)·ω_k,i−1 + α·M_k,i    (37)
where ω_k,i is the current weight, α is the learning rate, ω_k,i−1 is the previous corresponding weight, and M_k,i is the match quantization value: M_k,i = 1 for a match, and M_k,i = 0 otherwise.
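A simplified, self-contained sketch of the update cycle (34)-(37) in the Stauffer-Grimson style described above; the initial variance, the 2.5σ match gate and the single-mixture simplification are assumptions:

```python
import numpy as np

class DrivableColorGMM:
    """Color mixture model for the drivable region, per the steps above."""

    def __init__(self, K=5, alpha=0.2, init_var=400.0):
        self.K, self.alpha = K, alpha
        self.mu = np.zeros((K, 3))       # means of the K Gaussians
        self.var = np.full(K, init_var)  # isotropic variances (sigma_k^2 I)
        self.w = np.full(K, 1.0 / K)     # weights
        self.used = 0                    # number of Gaussians in use

    def update(self, x):
        """Feed one RGB pixel x (length-3 array) through match/update."""
        d2 = np.sum((self.mu[:self.used] - x) ** 2, axis=1)
        matched = np.where(d2 < (2.5 ** 2) * self.var[:self.used])[0]
        M = np.zeros(self.K)
        if matched.size:                 # matched: adapt per (34)-(36)
            k = matched[np.argmin(d2[matched])]
            rho = self.alpha             # simplified learning factor
            self.mu[k] = (1 - rho) * self.mu[k] + rho * x
            self.var[k] = (1 - rho) * self.var[k] + rho * d2[k]
            M[k] = 1.0
        elif self.used < self.K:         # not matched: add a new Gaussian
            self.mu[self.used], self.used = x, self.used + 1
        else:                            # or replace the lowest-weight one
            self.mu[np.argmin(self.w)] = x
        self.w = (1 - self.alpha) * self.w + self.alpha * M   # formula (37)
```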
(b) Region identification;
Compare the image of the current region to be determined against the established Gaussian model of drivable-region color by absolute value; if |x_i − f| ≥ T, where T is a threshold whose value is chosen according to actual needs, the point is considered a non-drivable region point. A corresponding Gaussian mixture model is established for the points of such a region, by the same process as for the drivable-region model, and added to the knowledge base for later analysis.
Step 4: result fusion, specifically comprising the following steps:
Fuse the discrimination results of the laser radar and the camera, judge the drivable and non-drivable regions, and construct the environment map.
Vivid, easy-to-use grid maps are commonly used to model the ground environment. The grid map was first proposed by Elfes and Moravec and has been developed since. Existing grid maps fall into two classes, two-dimensional and three-dimensional. Each cell of a two-dimensional grid map can only represent two states, occupied or free, but it is convenient to use; a three-dimensional grid map can also represent other attributes of a cell, for example height, and thus expresses the terrain environment more finely. When a grid map is constructed, the achievable terrain resolution is closely tied to the cell size: high resolution requires smaller cells, so the number of cells grows and computation becomes complex, while reducing the number of cells requires larger cells and lowers the resolution. Constructing a grid map must therefore weigh factors such as the driving capability of the unmanned vehicle itself, the sensor resolution, the terrain characteristics of the environment and the computing capacity of the system.
Combining the discrimination results of the laser radar and the camera, the grid map is divided into drivable and non-drivable regions; since this makes path planning more convenient for the unmanned vehicle, a two-dimensional grid map is established. The principles of the division are:
(1) if the laser radar discrimination result is a non-drivable region, the grid map cell is marked non-drivable;
(2) if the laser radar discrimination result is a drivable region, the camera discrimination result is analyzed:
(a) if the camera discrimination result is a drivable region, the cell is drivable;
(b) if the camera discrimination result is a non-drivable region, the cell is non-drivable;
(3) if a region is detected by neither the laser radar nor the camera, that region is an unknown region.
The detailed process of establishing the two-dimensional grid map is as follows:
(1) divide the grid cells by size in the body coordinate system;
(2) mark the drivable regions, non-drivable regions and unknown regions identified after fusing the laser radar data and the camera, as in the sketch below.
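A minimal sketch of this cell-marking fusion, assuming integer cell labels and `None` for "not observed" (both representational assumptions, not from the patent):

```python
import numpy as np

# Cell labels for the two-dimensional grid map.
UNKNOWN, DRIVABLE, NON_DRIVABLE = 0, 1, 2

def fuse_cell(lidar_label, camera_label):
    """Apply fusion principles (1)-(3) above to one grid cell."""
    if lidar_label is None and camera_label is None:
        return UNKNOWN                      # rule (3): neither sensor detects
    if lidar_label == NON_DRIVABLE:
        return NON_DRIVABLE                 # rule (1): lidar veto
    if lidar_label == DRIVABLE:             # rule (2): defer to the camera
        return NON_DRIVABLE if camera_label == NON_DRIVABLE else DRIVABLE
    return UNKNOWN

def build_grid_map(lidar_grid, camera_grid):
    """Fuse two label grids (same shape, object dtype, entries as above or
    None) cell by cell into the final two-dimensional grid map."""
    fuse = np.frompyfunc(fuse_cell, 2, 1)
    return fuse(lidar_grid, camera_grid).astype(np.uint8)
```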
The field environment obstacle detection method fusing distance and image information of the present invention is described below through a specific embodiment:
Start the three-dimensional laser radar and scan the surrounding scene to obtain a group of range data describing the environment, while simultaneously collecting the attitude information of the inertial navigation equipment at that moment.
Step 1: establish the mathematical model;
Establish the coordinate systems according to the steps above, correct this group of range data into a coordinate system whose origin lies on the horizontal plane according to the pitch angle and attitude angle output by the inertial navigation equipment, and then transform the data into the body coordinate system.
Step 2: laser radar range data obstacle detection;
(1) Data preprocessing;
Traverse each point in the scanned range data and apply median filtering with a 3 × 3 template.
(2) Region segmentation;
Segment the range data with the region segmentation method, with the parameters set to k = 20, n_cmin = 150, h_max = 0.1 m, ρ_e,max = 0.04 m and ρ_a,max = 0.011. A number of regions are obtained after segmentation.
(3) Region identification;
Compute the region features, including the average height, the singular values [σ_1 σ_2 σ_3] and the region surface normal vector γ.
According to the feature quantities of each region, identify which kind of road condition the region is, such as grass, trees, road or bushes. Grass and road can be considered drivable regions, while trees and bushes are obstacles. The parameter is set to h_max = 0.1.
Start the camera and capture images. Transform the range data of the drivable regions into the camera coordinate system and, from the camera image, obtain the color information corresponding to those regions.
Step 3: camera image processing;
Analyze the camera image and establish the Gaussian mixture model of the drivable region. The parameters are set to K = 5, initial weight ω = 0.05, overall drivable-region threshold T = 0.7 and learning rate α = 0.2.
Step 4: result fusion;
Fuse the recognition results of the range data and the camera image. According to the Gaussian mixture model, further analyze the image, identify the obstacles within the drivable region, and add new obstacles to the knowledge base.
Combining the drivable regions, non-drivable regions and unknown regions identified after fusing the laser radar data and the camera, assign the corresponding attribute to each grid cell and establish the grid map.
Claims (8)
1. A field environment obstacle detection method fusing distance and image information, comprising the following steps:
Step 1: establish a mathematical model;
In the step of establishing the mathematical model, coordinate systems are established for the vehicle body, the laser radar, the camera and the inertial navigation unit, and these coordinate systems are related to one another, specifically comprising the following steps:
(1) establish the body coordinate system O_bX_bY_bZ_b: the origin O_b is located at a fixed point of the vehicle body, namely the projection of the laser radar light source onto the ground plane; the X_b axis points to the right of the vehicle, the Y_b axis points straight ahead, and the Z_b axis points upward; the laser radar light source sits at height h_0 above the body coordinate system origin;
(2) establish the laser radar coordinate system O_lX_lY_lZ_l: the origin O_l is defined at the center of the laser radar light source; the X_l axis points straight up, the Y_l axis points horizontally to the right, and the Z_l axis points toward the front of the vehicle;
(3) establish the camera coordinate system O_cX_cY_cZ_c: the origin O_c is located at the focal point of the camera lens; the X_c axis is parallel to the horizontal direction of the image plane and points to the right; the Y_c axis is parallel to the vertical direction of the image plane and points downward; the Z_c axis is perpendicular to the image plane and points straight ahead;
(4) establish the inertial navigation coordinate system O_IX_IY_IZ_I: the inertial navigation coordinate system has the same orientation as the body coordinate system;
(5) determine the transformation relations between the coordinate systems, specifically comprising the following steps:
(a) transform from the laser radar coordinate system O_lX_lY_lZ_l to the body coordinate system O_bX_bY_bZ_b: first rotate 90° about the Y_l axis, then rotate 90° about the Z_l axis, and then translate downward by the distance h_0 along the Z_l axis direction; the parameters in the corresponding formula have the same meanings as above;
(b) transform the camera coordinate system to the image coordinate system: according to the standard camera pinhole model, the three-dimensional coordinates of a point under the camera coordinate system are converted to a pixel location in the image, where A is the camera intrinsic matrix, A = [α_x, γ, u_0; 0, α_y, v_0; 0, 0, 1], α_x and α_y are the scale factors of the u axis and v axis of the image coordinate system, (u_0, v_0) is the optical center, and γ is the skew factor between the u axis and v axis (in many cases γ = 0); the other parameters have the same meanings as above;
(c) transform from the laser radar coordinate system O_lX_lY_lZ_l to the camera coordinate system O_cX_cY_cZ_c:
X_c = R·X_l + t    (17)
where R and t are the rotation matrix and translation vector from the laser radar coordinate system to the camera coordinate system; the other parameters have the same meanings as above;
(d) correct the range data: suppose the vehicle-body pitch angle output by the inertial navigation equipment is α and the roll angle is γ; the corrected range data are obtained from the corresponding rotation formula, in which the parameters have the same meanings as above;
Step 2: laser radar range data obstacle detection;
Step 3: camera image processing;
Step 4: result fusion.
2. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that the laser radar range data obstacle detection step comprises the following steps:
(1) range data preprocessing: filter the laser radar range data;
(2) region segmentation: segment the three-dimensional point cloud obtained by the laser radar scan of the environment into a number of regions;
(3) region identification: analyze and identify grass, roads, trees and bushes.
3. The field environment obstacle detection method fusing distance and image information according to claim 2, characterized in that the region segmentation step comprises the following steps:
(a) retrieve the k points nearest to a given three-dimensional point p_i in its neighborhood, and compute the normal of the point set Q_i formed by these k+1 points as the normal vector of the scan point;
(b) the coordinates of the three-dimensional point p_i and its normal n_i form a feature vector; the Cartesian spatial distance between two three-dimensional points is ρ_e(p_i, p_j) = ‖p_i − p_j‖, where p_j and its normal n_j likewise form a feature vector; the angular distance is ρ_a(n_i, n_j) = 0.5 − 0.5·⟨n_i, n_j⟩ / (‖n_i‖·‖n_j‖); different thresholds ρ_e,max and ρ_a,max are set for the spatial distance and the angular distance; when either the spatial distance or the angular distance between two points exceeds the respective threshold, the two points are considered not to belong to the same region;
(c) compute the spatial distance ρ_e and angular distance ρ_a between all pairs of neighboring points; if ρ_e ≤ ρ_e,max and ρ_a ≤ ρ_a,max, merge the two points into one cluster;
(d) if a point cannot be clustered with any other, create a new region;
(e) after all points have been clustered, check the number of points in each class; if the number is less than a threshold n_cmin and the average height of the points in the class is much greater than the height h_max of obstacles the vehicle can surmount, the class is noise and is deleted.
4. The field environment obstacle detection method fusing distance and image information according to claim 2, characterized in that in the region segmentation step the value of k ranges from 10 to 30.
5. The field environment obstacle detection method fusing distance and image information according to claim 2, characterized in that the region identification step specifically comprises the following steps:
(a) compute region features: the average height, the singular values and the region surface normal vector; the computation is as follows:
singular values [σ_1 σ_2 σ_3]: form the coordinates of all points in the region into a matrix and perform the singular value decomposition UΣV^T; the elements of Σ are the singular values, sorted in descending order σ_1, σ_2, σ_3;
region surface normal vector γ: the unit vector in V^T corresponding to the smallest singular value σ_3;
(b) identification, comprising the following steps:
1. grass identification: a region is identified as grass when the following conditions are satisfied:
1) the average height of the points in the region is lower than the vehicle's obstacle-surmounting capability h_max;
2) the penetrance of the points in the region is high, and the singular values are distributed as σ_1 ≈ σ_2 ≈ σ_3;
3) the region surface normal vector γ points upward;
after grass is identified, these points are merged into the same region, and the region is marked as a drivable region;
2. road identification: a region is identified as road when the following conditions are satisfied:
2) the penetrance of the points in the region is low, with σ_1 > σ_2 >> σ_3;
3) the region surface normal vector γ points upward;
3. trunk identification: when the singular values are distributed as σ_1 >> σ_2 ≈ σ_3 and the region surface normal vector γ lies in the horizontal direction, the region is identified as a tree trunk;
4. bush identification: when the singular values satisfy σ_1 > σ_2 > σ_3 and the region surface normal vector γ also lies in the horizontal direction, the region is identified as bushes.
6. The field environment obstacle detection method fusing distance and image information according to claim 5, characterized in that in the identification step, when the ratio σ_1/σ_2 of the singular values σ_1 and σ_2 lies in [1, 4], σ_1 ≈ σ_2; when the ratio σ_1/σ_2 is greater than 50, σ_1 >> σ_2.
7. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that the camera image processing step specifically comprises the following steps:
(1) transform the laser radar data of the segmented regions into the image coordinate system of the camera;
(2) further identify the preliminarily judged drivable region, specifically comprising the following steps:
(a) establish a Gaussian mixture model: suppose the sequence of pixels in the image is {x_1, x_2, …, x_i} and the value of the i-th pixel is x_i = [R_i, G_i, B_i]; the probability of this pixel is
P(x_i) = Σ_{k=1..K} ω_k,i·η(x_i, μ_k,i, Σ_k,i)
where K is the number of Gaussian distributions, chosen as 3 to 5; ω_k,i is the estimated weight; μ_k,i is the mean of the k-th Gaussian distribution; and Σ_k,i is the covariance matrix of the k-th Gaussian distribution, assumed to follow the rule Σ_k,i = σ_k²·I; the probability density of pixel i in the k-th Gaussian distribution function is computed by
η(x_i, μ_i, Σ) = (2π)^(−3/2)·|Σ|^(−1/2)·exp(−(x_i − μ_i)^T·Σ^(−1)·(x_i − μ_i)/2)
where μ_i is the mean of the i-th Gaussian distribution and Σ is the covariance matrix of the Gaussian distribution;
(b) region identification: compare the image of the current region to be determined against the established Gaussian model of drivable-region color by absolute value; if |x_i − f| ≥ T, where T is a threshold, the point is a non-drivable region point.
8. The field environment obstacle detection method fusing distance and image information according to claim 1, characterized in that the result fusion step specifically comprises the following steps:
fuse the discrimination results of the laser radar and the camera, judge the drivable region and the non-drivable region, and construct an environment map; combining the discrimination results of the laser radar and the camera, the grid map is divided into drivable and non-drivable regions according to the following principles:
(1) if the laser radar discrimination result is a non-drivable region, the grid map cell is marked non-drivable;
(2) if the laser radar discrimination result is a drivable region, the camera discrimination result is analyzed:
(a) if the camera discrimination result is a drivable region, the cell is drivable;
(b) if the camera discrimination result is a non-drivable region, the cell is non-drivable;
(3) if a region is detected by neither the laser radar nor the camera, that region is an unknown region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010195586 CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201010195586 CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101975951A CN101975951A (en) | 2011-02-16 |
CN101975951B true CN101975951B (en) | 2013-03-20 |
Family
ID=43575853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201010195586 Expired - Fee Related CN101975951B (en) | 2010-06-09 | 2010-06-09 | Field environment barrier detection method fusing distance and image information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101975951B (en) |
Families Citing this family (104)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102338621B (en) * | 2011-04-27 | 2013-11-20 | 天津工业大学 | Method for detecting height of obstacle for indoor visual navigation |
CN102538766A (en) * | 2011-12-21 | 2012-07-04 | 武汉科技大学 | Obstacle test method for active intelligent vehicle |
US9626599B2 (en) * | 2012-04-09 | 2017-04-18 | GM Global Technology Operations LLC | Reconfigurable clear path detection system |
CN103514427B (en) * | 2012-06-15 | 2016-12-21 | 株式会社理光 | Vehicle checking method and system |
CN102779280B (en) * | 2012-06-19 | 2014-07-30 | 武汉大学 | Traffic information extraction method based on laser sensor |
CN103908346B (en) * | 2012-12-31 | 2016-04-20 | 复旦大学 | A kind of High Precision Automatic Use of Neuronavigation spatial registration method |
CN104899855A (en) * | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN103884281B (en) * | 2014-03-18 | 2015-10-21 | 北京控制工程研究所 | A kind of rover obstacle detection method based on initiating structure light |
CN106164698B (en) * | 2014-03-31 | 2019-11-26 | 三美电机株式会社 | Radar module, conveying equipment and object identification method |
US9098754B1 (en) * | 2014-04-25 | 2015-08-04 | Google Inc. | Methods and systems for object detection using laser point clouds |
KR20160054825A (en) * | 2014-11-07 | 2016-05-17 | 현대모비스 주식회사 | Apparatus and method for judging drivable space |
CN104503449A (en) * | 2014-11-24 | 2015-04-08 | 杭州申昊科技股份有限公司 | Positioning method based on environment line features |
CN104574376B (en) * | 2014-12-24 | 2017-08-08 | 重庆大学 | Avoiding collision based on binocular vision and laser radar joint verification in hustle traffic |
CN104482872B (en) * | 2014-12-31 | 2018-02-13 | 中联重科股份有限公司 | Curb boundary detection method and device based on parallel structured light and engineering machinery |
JP6418961B2 (en) * | 2015-01-28 | 2018-11-07 | シャープ株式会社 | Obstacle detection device, moving object, obstacle detection method, and obstacle detection program |
JP6593588B2 (en) * | 2015-02-16 | 2019-10-23 | パナソニックIpマネジメント株式会社 | Object detection apparatus and object detection method |
CN104833364B (en) * | 2015-05-07 | 2018-05-18 | 深圳市爱民科技有限公司 | A kind of emergency route indicating means on bump course |
CN104933708A (en) * | 2015-06-07 | 2015-09-23 | 浙江大学 | Barrier detection method in vegetation environment based on multispectral and 3D feature fusion |
CN106326810B (en) * | 2015-06-25 | 2019-12-24 | 株式会社理光 | Road scene recognition method and equipment |
CN105205859B (en) * | 2015-09-22 | 2018-05-15 | 王红军 | A kind of method for measuring similarity of the environmental characteristic based on 3 d grid map |
CN105204510B (en) * | 2015-10-09 | 2016-06-22 | 福州华鹰重工机械有限公司 | A kind of generation method for pinpoint probability map and device |
CN105301577B (en) * | 2015-10-10 | 2016-07-06 | 福州华鹰重工机械有限公司 | A kind of laser intensity calibration steps and device |
CN105571599A (en) * | 2015-12-21 | 2016-05-11 | 小米科技有限责任公司 | Road condition information processing method and device |
CN105512641B (en) * | 2015-12-31 | 2019-02-19 | 哈尔滨工业大学 | A method of dynamic pedestrian and vehicle under calibration sleet state in video |
CN105818763B (en) * | 2016-03-09 | 2018-06-22 | 睿驰智能汽车(广州)有限公司 | A kind of method, apparatus and system of determining vehicle periphery object distance |
CN108779984A (en) * | 2016-03-16 | 2018-11-09 | 索尼公司 | Signal handling equipment and signal processing method |
CN105956527B (en) * | 2016-04-22 | 2019-10-22 | 百度在线网络技术(北京)有限公司 | Detection of obstacles outcome evaluation method and apparatus for automatic driving car |
US9672446B1 (en) * | 2016-05-06 | 2017-06-06 | Uber Technologies, Inc. | Object detection for an autonomous vehicle |
US10394237B2 (en) * | 2016-09-08 | 2019-08-27 | Ford Global Technologies, Llc | Perceiving roadway conditions from fused sensor data |
CN106447697B (en) * | 2016-10-09 | 2018-10-26 | 湖南穗富眼电子科技有限公司 | A kind of specific moving-target fast tracking method based on moving platform |
CN106546996A (en) * | 2016-10-15 | 2017-03-29 | 北海益生源农贸有限责任公司 | Road Detection and tracking based on four line laser radars |
RU2728836C2 (en) | 2016-10-28 | 2020-07-31 | Ппг Индастриз Огайо, Инк. | Coatings to increase detection distance to object detected by means of electromagnetic radiation of near infrared range |
US10163015B2 (en) * | 2016-11-16 | 2018-12-25 | Ford Global Technologies, Llc | Detecting foliage using range data |
CN106650640B (en) * | 2016-12-05 | 2020-03-03 | 浙江大学 | Negative obstacle detection method based on laser radar point cloud local structure characteristics |
CN106646474A (en) * | 2016-12-22 | 2017-05-10 | 中国兵器装备集团自动化研究所 | Unstructured road accidented barrier detection apparatus |
CN110235026A (en) * | 2017-01-26 | 2019-09-13 | 御眼视觉技术有限公司 | The automobile navigation of image and laser radar information based on alignment |
CN108535736A (en) * | 2017-03-05 | 2018-09-14 | 苏州中德睿博智能科技有限公司 | Three dimensional point cloud acquisition methods and acquisition system |
CN107092039A (en) * | 2017-03-10 | 2017-08-25 | 南京沃杨机械科技有限公司 | Farm machinery navigation farm environment cognitive method |
CN106909148A (en) * | 2017-03-10 | 2017-06-30 | 南京沃杨机械科技有限公司 | Based on the unmanned air navigation aid of agricultural machinery that farm environment is perceived |
CN106891889A (en) * | 2017-03-10 | 2017-06-27 | 南京沃杨机械科技有限公司 | Agricultural machinery is unmanned to use farm environment cognitive method |
CN108629231B (en) | 2017-03-16 | 2021-01-22 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, apparatus, device and storage medium |
CN106896353A (en) * | 2017-03-21 | 2017-06-27 | 同济大学 | A kind of unmanned vehicle crossing detection method based on three-dimensional laser radar |
CN107194962B (en) * | 2017-04-01 | 2020-06-05 | 深圳市速腾聚创科技有限公司 | Point cloud and plane image fusion method and device |
CN107421507A (en) * | 2017-04-28 | 2017-12-01 | 上海华测导航技术股份有限公司 | Streetscape data acquisition measuring method |
CN107169986B (en) * | 2017-05-23 | 2019-09-17 | 北京理工大学 | A kind of obstacle detection method and system |
CN107167141B (en) * | 2017-06-15 | 2020-08-14 | 同济大学 | Robot autonomous navigation system based on double laser radars |
CN109116374B (en) * | 2017-06-23 | 2021-08-17 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for determining distance of obstacle and storage medium |
CN107215339B (en) * | 2017-06-26 | 2019-08-23 | 地壳机器人科技有限公司 | The lane-change control method and device of automatic driving vehicle |
CN109212532B (en) * | 2017-07-04 | 2021-08-20 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting obstacles |
CN109238221B (en) * | 2017-07-10 | 2021-02-26 | 上海汽车集团股份有限公司 | Method and device for detecting surrounding environment of vehicle |
JP6928499B2 (en) * | 2017-07-21 | 2021-09-01 | 株式会社タダノ | Guide information display device and work equipment |
CN107703935A (en) * | 2017-09-12 | 2018-02-16 | 安徽胜佳和电子科技有限公司 | Multiple data weighting fusions carry out method, storage device and the mobile terminal of avoidance |
CN107673283A (en) * | 2017-11-17 | 2018-02-09 | 芜湖金智王机械设备有限公司 | The control system of unmanned fork truck |
CN107966700A (en) * | 2017-11-20 | 2018-04-27 | 天津大学 | A kind of front obstacle detecting system and method for pilotless automobile |
CN109816135B (en) * | 2017-11-22 | 2023-09-26 | 博泰车联网科技(上海)股份有限公司 | Cluster risk avoiding method, system, computer readable storage medium and service terminal |
CN108256413B (en) * | 2017-11-27 | 2022-02-25 | 科大讯飞股份有限公司 | Passable area detection method and device, storage medium and electronic equipment |
CN108021132A (en) * | 2017-11-29 | 2018-05-11 | 芜湖星途机器人科技有限公司 | Paths planning method |
CN108037490B (en) * | 2017-11-30 | 2020-07-24 | 中煤航测遥感集团有限公司 | Method and system for detecting positioning accuracy of ground penetrating radar |
JP6888538B2 (en) * | 2017-12-18 | 2021-06-16 | トヨタ自動車株式会社 | Vehicle control device |
CN108398672B (en) * | 2018-03-06 | 2020-08-14 | 厦门大学 | Forward-tilting 2D laser radar mobile scanning-based pavement and obstacle detection method |
CN108573492B (en) * | 2018-04-02 | 2020-04-03 | 电子科技大学 | Real-time radar detection area detection method |
CN108549087B (en) * | 2018-04-16 | 2021-10-08 | 北京瑞途科技有限公司 | Online detection method based on laser radar |
CN108847026A (en) * | 2018-05-31 | 2018-11-20 | 安徽四创电子股份有限公司 | A method of it is converted based on matrix coordinate and realizes that data investigation is shown |
CN109100741B (en) * | 2018-06-11 | 2020-11-20 | 长安大学 | Target detection method based on 3D laser radar and image data |
CN112424629B (en) * | 2018-06-26 | 2024-04-09 | 苏州宝时得电动工具有限公司 | Electric device using radar |
WO2020006764A1 (en) * | 2018-07-06 | 2020-01-09 | 深圳前海达闼云端智能科技有限公司 | Path detection method, related device, and computer readable storage medium |
CN110378904B (en) * | 2018-07-09 | 2021-10-01 | 北京京东尚科信息技术有限公司 | Method and device for segmenting point cloud data |
CN110850856B (en) * | 2018-07-25 | 2022-11-25 | 北京欣奕华科技有限公司 | Data processing method and device and robot |
CN109099923A (en) * | 2018-08-20 | 2018-12-28 | 江苏大学 | Road scene based on laser, video camera, GPS and inertial navigation fusion characterizes system and method |
CN110909569B (en) * | 2018-09-17 | 2022-09-23 | 深圳市优必选科技有限公司 | Road condition information identification method and terminal equipment |
CN110927762B (en) * | 2018-09-20 | 2023-09-01 | 上海汽车集团股份有限公司 | Positioning correction method, device and system |
CN109116867B (en) * | 2018-09-28 | 2020-04-14 | 拓攻(南京)机器人有限公司 | Unmanned aerial vehicle flight obstacle avoidance method and device, electronic equipment and storage medium |
CN109444916B (en) * | 2018-10-17 | 2023-07-04 | 上海蔚来汽车有限公司 | Unmanned driving drivable area determining device and method |
CA3119767A1 (en) | 2018-11-13 | 2020-05-22 | Ppg Industries Ohio, Inc. | Method of detecting a concealed pattern |
CN109633686B (en) * | 2018-11-22 | 2021-01-19 | 浙江中车电车有限公司 | Method and system for detecting ground obstacle based on laser radar |
CN111256651B (en) * | 2018-12-03 | 2022-06-07 | 北京京东乾石科技有限公司 | Week vehicle distance measuring method and device based on monocular vehicle-mounted camera |
US11561329B2 (en) | 2019-01-07 | 2023-01-24 | Ppg Industries Ohio, Inc. | Near infrared control coating, articles formed therefrom, and methods of making the same |
CN109781129A (en) * | 2019-01-28 | 2019-05-21 | 重庆邮电大学 | A kind of road surface safety detection system and method based on inter-vehicular communication |
CN109696173A (en) * | 2019-02-20 | 2019-04-30 | 苏州风图智能科技有限公司 | A kind of car body air navigation aid and device |
CN109901142B (en) * | 2019-02-28 | 2021-03-30 | 东软睿驰汽车技术(沈阳)有限公司 | Calibration method and device |
CN111694903B (en) * | 2019-03-11 | 2023-09-12 | 北京地平线机器人技术研发有限公司 | Map construction method, device, equipment and readable storage medium |
CN109946701B (en) * | 2019-03-26 | 2021-02-02 | 新石器慧通(北京)科技有限公司 | Point cloud coordinate conversion method and device |
CN111830470B (en) * | 2019-04-16 | 2023-06-27 | 杭州海康威视数字技术股份有限公司 | Combined calibration method and device, target object detection method, system and device |
CN110007293B (en) * | 2019-04-24 | 2021-11-02 | 禾多科技(北京)有限公司 | On-line calibration method of field end multi-line beam laser radar |
CN112101069A (en) | 2019-06-18 | 2020-12-18 | 华为技术有限公司 | Method and device for determining driving area information |
CN110243380B (en) * | 2019-06-26 | 2020-11-24 | 华中科技大学 | Map matching method based on multi-sensor data and angle feature recognition |
CN112179361B (en) * | 2019-07-02 | 2022-12-06 | 华为技术有限公司 | Method, device and storage medium for updating work map of mobile robot |
CN110346808B (en) * | 2019-07-15 | 2023-01-31 | 上海点积实业有限公司 | Point cloud data processing method and system of laser radar |
JP2021018073A (en) * | 2019-07-17 | 2021-02-15 | 本田技研工業株式会社 | Information providing device, information providing method, and program |
CN110488320B (en) * | 2019-08-23 | 2023-02-03 | 南京邮电大学 | Method for detecting vehicle distance by using stereoscopic vision |
CN110702028B (en) * | 2019-09-04 | 2020-09-15 | 中国农业机械化科学研究院 | Three-dimensional detection positioning method and device for orchard trunk |
CN110688440B (en) * | 2019-09-29 | 2022-03-04 | 中山大学 | Map fusion method suitable for less sub-map overlapping parts |
CN110567135A (en) * | 2019-10-08 | 2019-12-13 | 珠海格力电器股份有限公司 | air conditioner control method and device, storage medium and household equipment |
CN110807439B (en) * | 2019-11-12 | 2022-11-25 | 银河水滴科技(北京)有限公司 | Method and device for detecting obstacle |
CN110909671B (en) * | 2019-11-21 | 2020-09-29 | 大连理工大学 | Grid map obstacle detection method integrating probability and height information |
EP3905106A1 (en) * | 2020-04-27 | 2021-11-03 | Aptiv Technologies Limited | Method for determining a drivable area |
CN111551958B (en) * | 2020-04-28 | 2022-04-01 | 北京踏歌智行科技有限公司 | Mining area unmanned high-precision map manufacturing method |
CN111879287B (en) * | 2020-07-08 | 2022-07-15 | 河南科技大学 | Forward terrain three-dimensional construction method of low-speed vehicle based on multiple sensors |
CN112698301B (en) * | 2020-12-11 | 2024-06-28 | 中国科学院微电子研究所 | Laser radar target identification method, device and equipment based on distance strength association |
CN112684430A (en) * | 2020-12-23 | 2021-04-20 | 哈尔滨工业大学(威海) | Indoor old person walking health detection method and system, storage medium and terminal |
CN113484875B (en) * | 2021-07-30 | 2022-05-24 | 燕山大学 | Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering |
CN115540892B (en) * | 2022-11-28 | 2023-03-24 | 北京理工大学深圳汽车研究院(电动车辆国家工程实验室深圳研究院) | Obstacle-detouring terminal point selection method and system for fixed line vehicle |
CN116985803B (en) * | 2023-09-26 | 2023-12-29 | 赛奎鹰智能装备(威海)有限责任公司 | Self-adaptive speed control system and method for electric scooter |
CN118279876B (en) * | 2024-05-23 | 2024-08-16 | 陕西交通职业技术学院 | Automatic obstacle avoidance method and system for cleaning vehicle based on image processing |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101701818A (en) * | 2009-11-05 | 2010-05-05 | 上海交通大学 | Method for detecting long-distance barrier |
Non-Patent Citations (5)
Title |
---|
Su Jiangang et al. Research on hardware-in-the-loop simulation technology for the missile-target line of sight of laser-guided weapons. Journal of System Simulation, 2007, Vol. 19, No. 8, pp. 1717-1720. * |
Zhao Cheng et al. Multi-resolution image fusion based on particle swarm optimization. Acta Armamentarii, 2010, Vol. 31, No. 2, pp. 171-176. * |
Deng Zhihong et al. Multi-rate multi-sensor data fusion based on incomplete observation data. Systems Engineering and Electronics, 2010, Vol. 32, No. 5, pp. 886-890. * |
Deng Zhihong et al. An improved feature matching point extraction algorithm for vision sensors and laser ranging radar. Optical Technique, 2010, Vol. 36, No. 1, pp. 43-48. * |
Deng Zhihong et al. Analysis of the influence of turntable angular position reference error on the calibration of laser strapdown inertial navigation systems. Journal of Chinese Inertial Technology, 2009, Vol. 17, No. 4, pp. 498-505. * |
Also Published As
Publication number | Publication date |
---|---|
CN101975951A (en) | 2011-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101975951B (en) | Field environment barrier detection method fusing distance and image information | |
US11885910B2 (en) | Hybrid-view LIDAR-based object detection | |
US11681746B2 (en) | Structured prediction crosswalk generation | |
US11790668B2 (en) | Automated road edge boundary detection | |
CN106204705B (en) | A kind of 3D point cloud dividing method based on multi-line laser radar | |
Guan et al. | Automated road information extraction from mobile laser scanning data | |
Zhu et al. | 3d lidar point cloud based intersection recognition for autonomous driving | |
CN110235026A (en) | The automobile navigation of image and laser radar information based on alignment | |
JP2019527832A (en) | System and method for accurate localization and mapping | |
CN112346463B (en) | Unmanned vehicle path planning method based on speed sampling | |
CN110531376A (en) | Detection of obstacles and tracking for harbour automatic driving vehicle | |
CN106199558A (en) | Barrier method for quick | |
CN113673282A (en) | Target detection method and device | |
US20230266473A1 (en) | Method and system for object detection for a mobile robot with time-of-flight camera | |
Silver et al. | Experimental analysis of overhead data processing to support long range navigation | |
CN113009453B (en) | Mine road edge detection and mapping method and device | |
Zhang et al. | Rapid inspection of pavement markings using mobile LiDAR point clouds | |
Browning et al. | 3D Mapping for high-fidelity unmanned ground vehicle lidar simulation | |
Zhao | Recognizing features in mobile laser scanning point clouds towards 3D high-definition road maps for autonomous vehicles | |
Vishnyakov et al. | Semantic scene understanding for the autonomous platform | |
CN113762195A (en) | Point cloud semantic segmentation and understanding method based on road side RSU | |
Guan | Automated extraction of road information from mobile laser scanning data | |
Huang et al. | Ground filtering algorithm for mobile LIDAR using order and neighborhood point information | |
Carballo et al. | High density ground maps using low boundary height estimation for autonomous vehicles | |
Gu et al. | Correction of vehicle positioning error using 3D-map-GNSS and vision-based road marking detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2013-03-20; Termination date: 2013-06-09 |