CN113569652A - Method for detecting short obstacles by automatic parking all-round looking camera - Google Patents
Method for detecting short obstacles by automatic parking all-round looking camera
- Publication number
- Publication number: CN113569652A
- Application number: CN202110737067.3A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- target
- automatic parking
- short
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a method for detecting short obstacles with an automatic parking look-around camera. Images around the vehicle are collected by a vehicle-mounted look-around camera, and short obstacles in the images are labeled as targets. A convolutional neural network training model is constructed to train on the labeled data and obtain the pixel point set of each short-obstacle target. This pixel point set is discretized, key points are selected, and feature descriptors of the key points are calculated. The feature descriptors are used to match key points between the upper and lower frames, and successfully matched key points are screened out. Finally, combining the vehicle motion information, the position difference of each successfully matched point pair Q between the upper and lower images is calculated, and the height of the short obstacle is calculated from this difference. The type, position, and height of short obstacles can thus be detected effectively, providing effective information for automatic parking.
Description
Technical Field
The invention relates to the technical field of automatic parking perception, and in particular to a method for detecting short obstacles with an automatic parking look-around camera.
Background
Automatic parking uses vehicle-mounted look-around perception and the detection information of ultrasonic probes as the basis for parking control. Obstacle detection and ranging rely on ultrasound, which can generally identify large objects such as vehicles effectively. However, short obstacles such as ground locks, wheel chocks, fences, and ice cream cones return too little ultrasonic echo to be identified, which seriously reduces the safety of the parking process. In addition, camera imaging and ultrasonic ranging differ in their failure modes: ultrasound is not affected by external conditions such as light, while camera imaging is strongly affected by illumination, so low-light conditions such as nighttime can produce abnormal images and further degrade the accuracy of obstacle detection.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a method for detecting short obstacles with an automatic parking look-around camera. The method detects and classifies short obstacles with deep-learning target detection, and effectively calculates their position and height by monocular camera ranging, thereby providing accurate information for automatic parking.
Specifically, the invention provides a method for detecting short obstacles by using an automatic parking all-round looking camera, which comprises the following steps:
S1: acquiring images around the vehicle through the vehicle-mounted all-round-looking camera, and carrying out target labeling on short obstacles in the images;
s2: constructing a convolutional neural network structure training model to perform semantic segmentation training and 2D target training on the data labeled by the target respectively;
S3: according to the 2D target information obtained by training, selecting the set of pixels inside the 2D frame whose semantic segmentation class equals the class of maximum probability in the segmentation result, this set being the pixel point set of one and the same short obstacle; discretizing the identified short-obstacle target pixel point set, selecting key points, and calculating feature descriptors of the key points;
S4: matching key points of the upper and lower frames by using the feature descriptors, and screening out the successfully matched key points;
S5: when the vehicle moves, calculating, in combination with the vehicle motion information, the position difference of each successfully matched point pair Q between the upper and lower images, and calculating the height of the short obstacle from this position difference.
Wherein, the S1 further includes:
sequentially labeling the background, ground locks, wheel chocks, road edges, ice cream cones, parking posts and other short obstacles with colors according to the target outlines, using a semantic segmentation labeling method;
and sequentially labeling the types of the ground locks, wheel chocks, road edges, ice cream cones, parking posts and other short obstacles according to 2D frames, using a 2D target labeling method.
Further, the semantic segmentation training includes: acquiring a semantic segmentation annotation training set comprising sample images and color annotation labels; generating from the color annotation labels, according to the classification information, a mask image corresponding to each sample image, the sample image and the mask image keeping the same size; inputting the sample images and mask images of the training set into a pre-constructed semantic segmentation model to obtain the classification probability of each pixel of the sample image; and updating the parameters of the semantic segmentation model based on the classification probabilities and the mask map to obtain the trained semantic segmentation model.
The 2D target training comprises: acquiring a 2D target sample training set, and processing the images with a target detection network to obtain 2D target detection results for the sample images; the target detection network comprises one or more of a backbone network, feature convolution layers, max-pooling layers, fully connected layers, and a region convolutional neural network (RCNN); after a spatial search is performed in the image by the 2D target detection network, the residual against the labeled 2D frame is calculated and the network is updated iteratively to obtain the optimal training weights.
The selecting of key points further comprises: sorting all classified pixel coordinates from small to large by their Y value; after sorting, the maximum and minimum X coordinates of each row are edge points, and all screened edge points are used as the key points.
The calculating of the feature descriptors of the key points further comprises: calculating the local feature of each key point by using the local feature descriptor BEBLID to obtain the local texture information of each key point, and expressing the local texture information as a binary numerical vector V = (v1, v2, …, vn) over different directions on the image, where n is the dimension of the binary vector V.
The S4 further includes: establishing a correspondence between the descriptors of the two frames of images and calculating the Hamming distance between the numerical vectors of each pair of descriptors. The Hamming distance is calculated by counting the numbers of equal and unequal bits in the two binary numerical vectors, and the Hamming weight w is calculated from the number m of equal bits as w = m/n. When w > 0.8, the two points are considered matched, and the matched points are stored as a point pair Q.
The S5 further includes:
the interval time between the upper and lower images is Δt; the displacements of the vehicle in the lateral and longitudinal directions are calculated from wheel speed information during the motion of the vehicle as Δx = vx·Δt and Δy = vy·Δt, wherein the vehicle has velocity vx in the lateral direction and vy in the longitudinal direction; the displacements of the matched point pair in the image are du and dv, and the position difference equation is ΔD = sqrt((du − Δx)² + (dv − Δy)²).
Further, during the movement of the vehicle, if a successfully matched point pair lies on the ground, there is no position difference; if a position difference exists, the matched point pair Q is not on the ground, and the relation between the height of the point and the position difference is linear: H = a·ΔD, wherein a is a weight coefficient.
Further, the median position coordinate in the current top view around the point pair Q isCombining the height H of the point, and calculating the real position of the point according to the similar triangle principle(ii) a And sorting the height information of all the matching point pairs from small to large, and selecting the largest point and the smallest point, namely the lowest point and the highest point of the short obstacle.
In summary, the invention provides a method for detecting short obstacles with an automatic parking look-around camera. Images around the vehicle are collected by the vehicle-mounted look-around camera, and short obstacles in the images are labeled as targets. A convolutional neural network training model is constructed to train on the labeled data and obtain the pixel point set of each short-obstacle target. The pixel point set is discretized, key points are selected, and feature descriptors of the key points are calculated. The feature descriptors are used to match key points between the upper and lower frames, and successfully matched key points are screened out. Finally, combining the vehicle motion information, the position difference of each successfully matched point pair Q between the upper and lower images is calculated, and the height of the short obstacle is calculated from this difference, so that the type, position, and height of short obstacles are detected effectively, providing effective information for automatic parking.
Drawings
Fig. 1 is a schematic diagram of the method for detecting short obstacles with an automatic parking all-round camera according to the invention.
Fig. 2 is a graph of the test results obtained using the method of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the present invention provides a method for detecting a short obstacle by an automatic parking looking-around camera, comprising the following steps:
S1: acquiring images around the vehicle through the vehicle-mounted all-round-looking camera, and carrying out target labeling on short obstacles in the images;
specifically, a semantic segmentation labeling method is adopted to label the color of a background, a ground lock, a wheel block, a road edge, an ice cream cone, a parking vertical rod and other low obstacles in sequence according to the outline of a target.
And marking the types of the ground lock, the wheel block, the road edge, the ice cream cone, the parking vertical rod and other low obstacles in sequence according to the 2D frame by adopting a 2D target marking method.
S2: constructing a convolutional neural network structure training model to perform semantic segmentation training and 2D target training on the data labeled by the target respectively;
the method specifically comprises the following steps: the semantic segmentation training process comprises the following steps: and acquiring a semantic segmentation labeling training set, wherein the training set comprises a sample image and color labeling labels, and generating a mask image corresponding to the sample image by the color labeling labels according to classification information, wherein the sample image and the mask image keep the same size. Inputting the sample images and the mask images of the training set into a pre-constructed semantic segmentation model to obtain the classification probability corresponding to each pixel of the sample images; and updating parameters of the semantic segmentation model based on the classification probability and the mask map to obtain the trained semantic segmentation model.
The 2D target training process is as follows: a 2D target sample training set is acquired, and the images are processed with a target detection network to obtain 2D target detection results for the sample images. The target detection network comprises a backbone network, feature convolution layers, max-pooling layers, fully connected layers, and a region convolutional neural network (RCNN). In the training process of the 2D target detection network, a spatial search is performed in the image, the residual against the labeled 2D frame is calculated, and the network is updated iteratively to obtain the optimal training weights.
S3: according to the 2D target information obtained by training, selecting the set of pixels inside the 2D frame whose semantic segmentation class equals the class of maximum probability in the segmentation result, this set being the pixel point set of one and the same short obstacle; discretizing the identified short-obstacle target pixel point set, selecting key points, and calculating feature descriptors of the key points;
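The key-point selection in S3 (sort by Y, keep the extreme X coordinates of each row) can be sketched as follows; the function name is illustrative:

```python
from collections import defaultdict

def select_edge_keypoints(pixels):
    """Given the (x, y) pixel coordinates of one segmented obstacle,
    keep the leftmost and rightmost pixel of every image row;
    these edge points serve as the key points."""
    rows = defaultdict(list)
    for x, y in pixels:
        rows[y].append(x)
    keypoints = []
    for y in sorted(rows):                  # rows sorted by Y, small to large
        xs = rows[y]
        keypoints.append((min(xs), y))      # row minimum -> left edge point
        if max(xs) != min(xs):
            keypoints.append((max(xs), y))  # row maximum -> right edge point
    return keypoints
```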
s4: matching key points of an upper frame and a lower frame by using the feature descriptors, and screening out the key points which are successfully matched;
Specifically, the local feature of each key point is calculated by using the local feature descriptor BEBLID (Boosted Efficient Binary Local Image Descriptor) to obtain the local texture information of each key point, which is expressed as a binary numerical vector V = (v1, v2, …, vn) over different directions on the image, where n is the dimension of the binary vector V.
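BEBLID itself learns boosted box-average comparisons; as a simplified stand-in, a binary descriptor of the same shape can be built from fixed random pixel-pair intensity comparisons (a toy sketch only, not the actual BEBLID algorithm):

```python
import random

def binary_descriptor(patch, n_bits=32, seed=0):
    """Toy binary descriptor for a grayscale patch (list of lists):
    each bit encodes whether one sampled pixel is brighter than another.
    A fixed seed makes the sampled pairs identical for every patch,
    so descriptors of different key points stay comparable."""
    rng = random.Random(seed)
    h, w = len(patch), len(patch[0])
    bits = []
    for _ in range(n_bits):
        py, px = rng.randrange(h), rng.randrange(w)
        qy, qx = rng.randrange(h), rng.randrange(w)
        bits.append(1 if patch[py][px] > patch[qy][qx] else 0)
    return bits
```

In practice, recent opencv-contrib builds ship a real BEBLID implementation (`cv2.xfeatures2d.BEBLID_create`), which would replace this toy sampler.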
And establishing a corresponding relation according to the descriptors of the two frames of images. Each descriptor in the first frame image is compared with all descriptors in the second frame image using a brute force solution algorithm, and matching is performed using hamming distances, i.e., different hamming distances are calculated between the digitized vectors of each pair of descriptors.
The Hamming distance is calculated by counting the numbers of equal and unequal bits in the two binary numerical vectors, and the Hamming weight w is calculated from the number m of equal bits as w = m/n.
When w > 0.8, the two groups of points are considered to be matched, and the matched points are saved as point pairs Q.
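The matching rule above (Hamming weight w = m/n, accept a pair when w > 0.8) can be sketched as:

```python
def hamming_weight(d1, d2):
    """w = m / n: the fraction of equal bits between two binary vectors."""
    m = sum(1 for a, b in zip(d1, d2) if a == b)
    return m / len(d1)

def match_keypoints(desc_prev, desc_curr, threshold=0.8):
    """Brute-force matching between two frames: pair each previous-frame
    descriptor with its best current-frame candidate, and keep the pair
    only when the Hamming weight exceeds the threshold."""
    pairs = []
    for i, d1 in enumerate(desc_prev):
        best_j, best_w = max(
            ((j, hamming_weight(d1, d2)) for j, d2 in enumerate(desc_curr)),
            key=lambda t: t[1],
        )
        if best_w > threshold:
            pairs.append((i, best_j))   # saved as a matched point pair Q
    return pairs
```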
S5: when the vehicle moves, the position difference of the point pair Q which is successfully matched in the upper image and the lower image is calculated by combining the vehicle movement information, and the height of a short obstacle is calculated according to the distance difference.
Specifically, the interval time between the upper and lower images is Δt. The displacements of the vehicle in the lateral and longitudinal directions are calculated from wheel speed information during the motion of the vehicle as Δx = vx·Δt and Δy = vy·Δt, wherein the vehicle has velocity vx in the lateral direction and vy in the longitudinal direction. The displacements of the matched point pair in the image are du and dv, and the position difference equation is ΔD = sqrt((du − Δx)² + (dv − Δy)²).
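The displacement prediction and position-difference computation can be sketched as follows (a minimal sketch; combining the lateral and longitudinal residuals in Euclidean form is an assumption):

```python
import math

def position_difference(p_prev, p_curr, vx, vy, dt):
    """Difference between the observed shift of a matched point pair Q in the
    top view and the shift predicted from vehicle motion. A ground point
    moves exactly with the vehicle, so its difference is zero."""
    dx = vx * dt                   # predicted lateral displacement
    dy = vy * dt                   # predicted longitudinal displacement
    du = p_curr[0] - p_prev[0]     # observed displacement in the image
    dv = p_curr[1] - p_prev[1]
    return math.hypot(du - dx, dv - dy)
```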
During the movement of the vehicle, if a successfully matched point pair lies on the ground, there is no position difference; if a position difference exists, the matched point pair Q is not on the ground, and the relation between the height of the point and the position difference is linear: H = a·ΔD, wherein a is a weight coefficient.
The center position coordinate of the point pair Q in the current surround top view is (u, v). Combining the height H of the point, the real position of the point is calculated according to the similar-triangle principle. The height information of all matched point pairs is then sorted from small to large, and the smallest and largest values are selected, namely the lowest point and the highest point of the short obstacle.
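The height model H = a·ΔD and the similar-triangle position correction can be sketched as follows; the camera height and the placement of the projection center at the top-view origin are assumptions, since the patent gives neither:

```python
def obstacle_height(d_diff, a=1.0):
    """Linear height model H = a * dD; a is a calibration weight
    whose value the patent leaves unspecified."""
    return a * d_diff

def real_position(u, v, h, cam_height=2.0):
    """Similar-triangle correction: a point of height h projects onto the
    ground plane farther from the camera than it really is, so its top-view
    coordinates (u, v) are scaled back toward the assumed camera center
    at the origin."""
    s = (cam_height - h) / cam_height
    return (u * s, v * s)

def height_extent(heights):
    """Sort matched-pair heights and return the lowest and highest
    point of the short obstacle."""
    hs = sorted(heights)
    return hs[0], hs[-1]
```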
Preferably, the above scheme is a process for calculating the height and position of the stationary target, and is also applicable to the moving target after motion compensation, wherein the motion relation of the moving target needs to be estimated in advance.
Fig. 2 shows the results of a test using the method of the present invention; in the figure, 0.0000 mm and 717.6146 mm are the minimum and maximum heights of the obstacle, together with their positions in the look-around image.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present invention should be defined by the appended claims.
Claims (10)
1. A method for detecting short obstacles by an automatic parking all-round camera is characterized by comprising the following steps:
S1: acquiring images around the vehicle through the vehicle-mounted all-round-looking camera, and carrying out target labeling on short obstacles in the images;
s2: constructing a convolutional neural network structure training model to perform semantic segmentation training and 2D target training on the data labeled by the target respectively;
S3: according to the 2D target information obtained by training, selecting the set of pixels inside the 2D frame whose semantic segmentation class equals the class of maximum probability in the segmentation result, this set being the pixel point set of one and the same short obstacle; discretizing the identified short-obstacle target pixel point set, selecting key points, and calculating feature descriptors of the key points;
S4: matching key points of the upper and lower frames by using the feature descriptors, and screening out the successfully matched key points;
S5: when the vehicle moves, calculating, in combination with the vehicle motion information, the position difference of each successfully matched point pair Q between the upper and lower images, and calculating the height of the short obstacle from this position difference.
2. The method for detecting a short obstacle with an automatic parking surround camera according to claim 1, wherein said S1 further includes:
sequentially labeling the background, ground locks, wheel chocks, road edges, ice cream cones, parking posts and other short obstacles with colors according to the target outlines, using a semantic segmentation labeling method;
and sequentially labeling the types of the ground locks, wheel chocks, road edges, ice cream cones, parking posts and other short obstacles according to 2D frames, using a 2D target labeling method.
3. The method for detecting a short obstacle with an automatic parking surround camera according to claim 2, wherein the semantic segmentation training comprises:
acquiring a semantic segmentation annotation training set, wherein the semantic segmentation annotation training set comprises a sample image and color annotation labels, generating a mask image corresponding to the sample image according to classification information by using the color annotation labels, and keeping the sample image and the mask image in the same size; inputting a sample image and a mask image of a semantic segmentation labeling training set into a pre-constructed semantic segmentation model to obtain classification probability corresponding to each pixel of the sample image; and updating parameters of the semantic segmentation model based on the classification probability and the mask map to obtain the trained semantic segmentation model.
4. The method for detecting a short obstacle with an automatic parking surround camera according to claim 2, wherein the 2D target training comprises:
acquiring a 2D target sample training set, and processing the image by adopting a target detection network to obtain a 2D target detection result of the sample image; the target detection network comprises one or more of a backbone network, a characteristic convolution layer, a maximum pooling network, a full connection layer and a regional convolution neural network RCNN; after space search is carried out in the image according to the 2D target detection network, residual calculation is carried out on the 2D target detection network and the labeled 2D frame, and then iterative updating is carried out to obtain an optimal training weight result.
5. The method for detecting short obstacles by using an automatic parking looking-around camera according to claim 1, wherein the selecting of key points further comprises: sorting all classified pixel coordinates from small to large by their Y value; after sorting, the maximum and minimum X coordinates of each row are edge points, and all screened edge points are used as the key points.
6. The method for detecting short obstacles by using an automatic parking looking-around camera in accordance with claim 1, wherein said calculating feature descriptors of key points further comprises:
calculating the local feature of each key point by using the local feature descriptor BEBLID to obtain the local texture information of each key point, and expressing the local texture information as a binary numerical vector V = (v1, v2, …, vn) over different directions on the image, where n is the dimension of the binary vector V.
7. The method for detecting a short obstacle with an automatic parking surround camera according to claim 1, wherein said S4 further includes:
establishing a correspondence between the descriptors of the two frames of images and calculating the Hamming distance between the numerical vectors of each pair of descriptors, wherein the Hamming distance is calculated by counting the numbers of equal and unequal bits in the two binary numerical vectors, and the Hamming weight w is calculated from the number m of equal bits as w = m/n; when w > 0.8, the two points are considered matched, and the matched points are stored as a point pair Q.
8. The method for detecting a short obstacle with an automatic parking surround camera according to claim 1, wherein said S5 further includes:
the interval time between the upper and lower images is Δt; the displacements of the vehicle in the lateral and longitudinal directions are calculated from wheel speed information during the motion of the vehicle as Δx = vx·Δt and Δy = vy·Δt, wherein the vehicle has velocity vx in the lateral direction and vy in the longitudinal direction; the displacements of the matched point pair in the image are du and dv, and the position difference equation is ΔD = sqrt((du − Δx)² + (dv − Δy)²).
9. The method for detecting a short obstacle with an automatic parking surround camera according to claim 8, further comprising: during the movement of the vehicle, if a successfully matched point pair lies on the ground, there is no position difference; if a position difference exists, the matched point pair Q is not on the ground, and the relation between the height of the point and the position difference is linear: H = a·ΔD, wherein a is a weight coefficient.
10. The method for detecting a short obstacle with an automatic parking surround camera according to claim 9, further comprising: the center position coordinate of the point pair Q in the current surround top view is (u, v); combining the height H of the point, the real position of the point is calculated according to the similar-triangle principle; the height information of all matched point pairs is sorted from small to large, and the smallest and largest values are selected, namely the lowest point and the highest point of the short obstacle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110737067.3A CN113569652A (en) | 2021-06-30 | 2021-06-30 | Method for detecting short obstacles by automatic parking all-round looking camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113569652A true CN113569652A (en) | 2021-10-29 |
Family
ID=78163214
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110737067.3A Pending CN113569652A (en) | 2021-06-30 | 2021-06-30 | Method for detecting short obstacles by automatic parking all-round looking camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113569652A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114419604A (en) * | 2022-03-28 | 2022-04-29 | 禾多科技(北京)有限公司 | Obstacle information generation method and device, electronic equipment and computer readable medium |
CN114407901A (en) * | 2022-02-18 | 2022-04-29 | 北京小马易行科技有限公司 | Control method and device for automatic driving vehicle and automatic driving system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858374A (en) * | 2018-12-31 | 2019-06-07 | 武汉中海庭数据技术有限公司 | Arrow class graticule extraction method and device in high-precision cartography |
CN111274974A (en) * | 2020-01-21 | 2020-06-12 | 北京百度网讯科技有限公司 | Positioning element detection method, device, equipment and medium |
CN111860072A (en) * | 2019-04-30 | 2020-10-30 | 广州汽车集团股份有限公司 | Parking control method and device, computer equipment and computer readable storage medium |
US20210117703A1 (en) * | 2019-10-18 | 2021-04-22 | Toyota Jidosha Kabushiki Kaisha | Road obstacle detection device, road obstacle detection method, and computer-readable storage medium |
Non-Patent Citations (2)
Title |
---|
李长城; 罗予频; 郑晓明: "Bird-View parking assistance scheme based on scene classification", Electronic Technology, no. 10, 25 October 2010 (2010-10-25) *
陆峰; 徐友春; 李永乐; 王德宇; 谢德胜: "Obstacle detection method for intelligent vehicles based on information fusion", Journal of Computer Applications, no. 2, 20 December 2017 (2017-12-20) *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |