CN115966102B - Early warning braking method based on deep learning - Google Patents
Early warning braking method based on deep learning
- Publication number
- CN115966102B CN115966102B CN202211731636.4A CN202211731636A CN115966102B CN 115966102 B CN115966102 B CN 115966102B CN 202211731636 A CN202211731636 A CN 202211731636A CN 115966102 B CN115966102 B CN 115966102B
- Authority
- CN
- China
- Prior art keywords
- target
- center
- network
- vehicle
- braking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 26
- 238000013135 deep learning Methods 0.000 title claims abstract description 22
- 238000013507 mapping Methods 0.000 claims abstract description 10
- 238000012545 processing Methods 0.000 claims abstract description 8
- 238000012549 training Methods 0.000 claims abstract description 7
- 230000006870 function Effects 0.000 claims description 25
- 238000010586 diagram Methods 0.000 claims description 17
- 238000013528 artificial neural network Methods 0.000 claims description 9
- 238000005516 engineering process Methods 0.000 claims description 5
- 238000004364 calculation method Methods 0.000 claims description 4
- 238000003384 imaging method Methods 0.000 claims description 4
- 230000008569 process Effects 0.000 claims description 4
- 230000009471 action Effects 0.000 claims description 3
- 230000000694 effects Effects 0.000 claims description 3
- 238000005457 optimization Methods 0.000 claims description 3
- 238000011176 pooling Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims description 3
- 230000008447 perception Effects 0.000 claims description 2
- 238000002372 labelling Methods 0.000 abstract description 3
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 206010039203 Road traffic accident Diseases 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000002592 echocardiography Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides an early warning braking method based on deep learning. The early warning braking system comprises a sensing layer, a decision layer and an application layer: the sensing layer is a front-end visible light camera, the decision layer is a data processing module, and the application layer is a brake control unit. The early warning braking method based on deep learning comprises the following steps: S1, a visible light camera collects image data, the image data are labelled, a mapping relation between the image data and vehicle position and distance information is established, and a data set is generated; S2, a depth estimation residual network model is constructed and trained to obtain the final network weights; S3, the network weight parameters are initialized, the original image data are fed into the network to obtain the position and distance information of the target, corresponding braking grades are formulated, and the corresponding braking commands are transmitted to the rear-end braking control system. The invention has the characteristics of low cost, convenient deployment, long detectable distance and high accuracy.
Description
Technical Field
The invention relates to the technical field of tracking, detection and collision early warning of automobile targets, in particular to an early warning braking method based on deep learning.
Background
While a vehicle is driving on the road, the surrounding environment can change suddenly, and a momentary lapse in the driver's attention can easily lead to a traffic accident. Most new energy vehicles are currently equipped with active safety systems, which protect drivers and passengers by giving timely warnings and braking for obstacles within a certain distance ahead. Current front collision early warning braking systems are mainly implemented in two ways: laser radar ranging and binocular vision ranging.
In laser radar ranging, a laser radar ranging device installed at the front of the vehicle emits laser beams towards the target; after the receiver receives the echoes, the distance between the vehicle and the target is calculated, and when the distance is smaller than a set threshold, a front collision warning is issued and corresponding braking measures are taken. In binocular vision ranging, the target is imaged by two visible light cameras at the front of the vehicle and the parallax between the two images is calculated, thereby obtaining the distance between the target and the vehicle.
Laser radar ranging is highly accurate but also expensive: a single laser radar sells for tens of thousands of yuan, which reduces the price competitiveness of the vehicle. Binocular vision ranging requires two visible light cameras; although the cost is relatively low, the measurable distance range is too small, the computational complexity is high, and the accuracy is poor in ranging scenes beyond 15 meters.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an early warning braking method based on deep learning.
In order to achieve the above purpose, the present invention adopts the following specific technical scheme:
The early warning braking method based on deep learning is realized by utilizing an early warning braking system, wherein the early warning braking system comprises a sensing layer, a decision layer and an application layer; the sensing layer is a front-end visible light camera; the decision layer is a data processing module; the application layer is a brake control unit; the early warning braking method based on deep learning is characterized by comprising the following steps:
s1, a visible light camera is used for collecting and labeling image data, establishing a mapping relation between the image data and vehicle position and distance information and generating a data set;
s2, a data processing module builds a depth estimation residual error network model, and a data set is subjected to convolution operation to extract corresponding features for combined learning, so that a feature map model of an image is obtained; training the depth estimation residual error network model to obtain final network weights;
S3, initializing network weight parameters, sending the original image data into a network to obtain position and distance information of a target, formulating corresponding braking grades, and transmitting corresponding braking commands to a rear-end braking control system.
Preferably, step S1 comprises the following sub-steps:
s11, a visible light camera is arranged in front of the vehicle, and different imaging conditions are set to acquire image data according to different weather conditions, vehicle distances in front and vehicle types;
s12, marking the collected image data in the running process of the vehicle;
s13, establishing a mapping relation between the image data and the vehicle position and distance information and generating a data set;
S14, the image data and the corresponding vehicle position information and distance information are taken as samples and added to the data set, generating 200 pieces of labelled image data with annotation information;
S15, the labelled image data are flipped, translated and compressed by a data set expansion technique and expanded to ten times the amount of data, constructing the final labelled data set.
Preferably, step S2 comprises the following sub-steps:
s21, constructing a depth estimation network model, wherein a network of the depth estimation network model is a residual network;
The residual network model uses a multi-scale deep convolutional neural network to extract vehicle features from images. An image of size 416x416 is input into the network and convolved with a 7x7x64 convolution kernel of stride 2 to obtain a 208x208 feature map; then a 3x3 pooling layer with stride 2 and 3 Conv2_x convolution kernels produce a 104x104 output feature map; then 4 Conv3_x convolution kernels produce a 52x52 output feature map; then 23 Conv4_x convolution kernels produce a 26x26 output feature map; then 3 Conv5_x convolution kernels produce a 13x13 output feature map;
S22, training the residual error network model to obtain final network weights;
the loss function used is as follows:
the establishment of the loss function comprises 4 steps:
S221, the sum-of-squares loss of the diagonal length of the predicted rectangular box;
S222, the sum-of-squares loss between the predicted box center coordinates and the actual target center coordinates; the weight of this part of the loss function is dynamically adjusted according to the difference between the prediction box center point and the image center point (x_center, y_center), so that the neural network learns targets near the center point better and target prediction is more accurate the closer the target is to the image center;
S223, the sum-of-squares loss of the pixel depth; the depth estimation loss is introduced into the loss function for iterative optimization, so that the network can finally predict the depth information corresponding to the current pixel;
S224, the cross-entropy loss of the confidence; the confidence of the final target is iteratively optimized by introducing the confidence loss into the loss function;
One indicator parameter denotes whether the j-th candidate box of the i-th grid contains the target, taking the value 1 if the target is contained and 0 otherwise; the lambda_coord parameter represents the weight coefficient of this part of the loss function; diagonal_i represents the predicted diagonal length of the rectangular box of the i-th grid, and its ground-truth counterpart is the true diagonal length of the j-th candidate box of the i-th grid; similarly, a second indicator parameter denotes whether this candidate box does not contain the target.
The horizontal coordinate x, vertical coordinate y, width w and height h of the prediction box are normalized to the range 0-1; the width and height of the j-th rectangular box of the i-th grid appear as further parameters; D(x, y) represents the depth information of the pixel at coordinates (x, y); p represents the prediction confidence, p_i represents the target confidence that the i-th grid is responsible for predicting, and s^2 is the number of grids.
Preferably, S2 further comprises the following sub-step:
S23, a gradient descent strategy is used to iteratively optimize the value of the loss function, so that the loss function continuously descends to the optimal point, and the current network weight parameters are saved; at the next prediction, new values of x, y, D(x, y), diagonal and p are obtained using the saved network weight parameters.
Preferably, step S3 includes:
s31, generating position information of a target and depth information D (x, y) corresponding to each pixel point according to the network weight;
S32, the target distance Dst(x, y) is calculated as the mean of the pixel depth information within the target area, by the following method:
where parameters x_center and y_center represent the coordinates of the center point of the image; x ranges over all values whose horizontal distance from the center point abscissa is within w, and y ranges over all values whose vertical distance from the center point ordinate is within h.
S33, the front vehicle target is screened out from the plurality of vehicle targets present in the image through the following flow;
S331, initializing a center distance variable dmax and a target number num_max;
S332, traversing all target depth estimates generated by the network;
S333, when the confidence of a target is larger than 0.9, calculating the distance dcentre between the target center and the image center and recording the target number num;
S334, if the calculated distance dcentre between the target center and the image center is larger than the initialized center distance variable dmax, setting dmax = dcentre and num_max = num;
S335, finishing the traversal;
S336, taking the target corresponding to the number num_max as the front vehicle target;
S34, the corresponding braking grade is obtained according to the mapping table of current vehicle speed, distance to the front vehicle and braking grade; the corresponding braking command is transmitted to the rear-end braking control system, and the braking control system performs the braking action to stop the vehicle.
According to the deep learning-based early warning braking method provided by the invention, a visible light camera is arranged in front of a vehicle, a deep learning neural network technology is used for sensing a front obstacle, the distance between a target and the vehicle is detected, and different braking strategies are adopted according to the relation between the current speed and the distance. The invention has the characteristics of low cost, convenient deployment, long detectable distance and high accuracy.
Drawings
Fig. 1 is a schematic flow diagram of an early warning braking system based on deep learning according to an embodiment of the present invention.
Fig. 2 is a network flow diagram of an early warning braking method based on deep learning according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the following description, like modules are denoted by like reference numerals. In the case of the same reference numerals, their names and functions are also the same. Therefore, a detailed description thereof will not be repeated.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
Fig. 1 shows a schematic flow chart of an early warning braking system based on deep learning according to an embodiment of the invention.
As shown in fig. 1, the system flow diagram provided in the embodiment of the present invention includes a sensing layer, a decision layer, and an application layer.
The sensing layer is the front-end visible light camera, the decision layer is the data processing module, and the application layer is the brake control unit.
Fig. 2 shows a network flow diagram of an early warning braking method based on deep learning according to an embodiment of the present invention.
As shown in fig. 2, the network flow diagram of the early warning braking method based on deep learning provided by the embodiment of the invention is shown.
The early warning braking method based on deep learning comprises the following steps:
S1, a visible light camera is used for collecting and labeling image data, establishing a mapping relation between the image data and vehicle position and distance information and generating a data set; the method comprises the following substeps:
S11, a visible light camera is arranged in front of a vehicle, different imaging conditions are set to collect image data according to different weather conditions, vehicle distances in front and vehicle types, and the diversity requirement of a data set is met;
s12, marking the collected image data in the running process of the vehicle;
S13, establishing a mapping relation between the image data and the vehicle position and distance and generating a data set;
S14, the image data and the corresponding vehicle position information and distance information are taken as samples and added to the data set, generating 200 pieces of image data with annotation information and meeting the diversity requirements of weather, distance, vehicle type and the like.
S15, finally, the 200 pieces of data are flipped, translated and compressed by an image data set expansion technique and expanded to ten times the amount of data, which serves as the final data set.
The visible light camera is a Hikvision E12 USB camera outputting RGB three-channel images at a resolution of 1920x1080. It is responsible for collecting images in front of the vehicle; the image data collected by the front camera are then sent into the neural network for feature extraction and depth estimation in the next step.
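As a concrete illustration of the S15 data set expansion, the following minimal Python sketch applies the flip, translate and compress transforms described above to grow the labelled set tenfold. It assumes OpenCV (cv2) and NumPy are available, the transform ranges are arbitrary placeholders, and the corresponding adjustment of the position and distance annotations is omitted although it would also be required.

```python
import random

import cv2
import numpy as np


def augment_image(image):
    """Apply one random flip / translate / compress transform (S15 sketch)."""
    h, w = image.shape[:2]
    choice = random.choice(["flip", "translate", "compress"])
    if choice == "flip":
        return cv2.flip(image, 1)                        # horizontal flip
    if choice == "translate":
        tx, ty = random.randint(-20, 20), random.randint(-20, 20)
        m = np.float32([[1, 0, tx], [0, 1, ty]])
        return cv2.warpAffine(image, m, (w, h))          # shift by (tx, ty) pixels
    scale = random.uniform(0.7, 0.95)                    # "compress": down- then up-scale
    small = cv2.resize(image, (int(w * scale), int(h * scale)))
    return cv2.resize(small, (w, h))


def expand_dataset(images, factor=10):
    """Expand the labelled image list to `factor` times its size, as in S15."""
    out = list(images)
    while len(out) < factor * len(images):
        out.append(augment_image(random.choice(images)))
    return out


# example: 20 dummy frames expanded tenfold to 200
frames = [np.zeros((108, 192, 3), dtype=np.uint8) for _ in range(20)]
print(len(expand_dataset(frames)))   # -> 200
```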
S2, sending the data into a data processing module for processing, constructing a depth estimation residual error network model, carrying out convolution operation on a data set, extracting corresponding features, and carrying out combination learning to obtain a feature map model of the image; training the depth estimation residual error network model to obtain final network weights; the method comprises the following substeps:
S21, the backbone of the depth estimation network adopted by the invention is a residual network, and the network architecture is shown in Table 1.
Table 1 network architecture
An image of size 416x416 is input into the network and convolved with a 7x7x64 convolution kernel of stride 2 to obtain a 208x208 feature map; then a 3x3 pooling layer with stride 2 and 3 Conv2_x convolution kernels produce a 104x104 output feature map; then 4 Conv3_x convolution kernels produce a 52x52 output feature map; then 23 Conv4_x convolution kernels produce a 26x26 output feature map; then 3 Conv5_x convolution kernels produce a 13x13 output feature map.
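For orientation, the downsampling schedule above (416 → 208 → 104 → 52 → 26 → 13, with 3, 4, 23 and 3 blocks per stage) can be sketched in PyTorch as below. Since Table 1 is not reproduced in the text, the bottleneck design and the channel widths (64/256/512/1024/2048) are assumptions borrowed from a standard ResNet-101 layout rather than the patent's exact architecture.

```python
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    """Simplified residual bottleneck block (1x1 -> 3x3 -> 1x1 with a skip)."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                     else nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                                        nn.BatchNorm2d(out_ch)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))


def stage(in_ch, mid_ch, out_ch, blocks, stride):
    """One Conv?_x stage: the first block may downsample, the rest keep the size."""
    layers = [Bottleneck(in_ch, mid_ch, out_ch, stride)]
    layers += [Bottleneck(out_ch, mid_ch, out_ch) for _ in range(blocks - 1)]
    return nn.Sequential(*layers)


class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(                                 # 416 -> 208
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(3, stride=2, padding=1)           # 208 -> 104
        self.conv2_x = stage(64, 64, 256, blocks=3, stride=1)      # 104 -> 104
        self.conv3_x = stage(256, 128, 512, blocks=4, stride=2)    # 104 -> 52
        self.conv4_x = stage(512, 256, 1024, blocks=23, stride=2)  # 52 -> 26
        self.conv5_x = stage(1024, 512, 2048, blocks=3, stride=2)  # 26 -> 13

    def forward(self, x):
        x = self.pool(self.stem(x))
        return self.conv5_x(self.conv4_x(self.conv3_x(self.conv2_x(x))))


# feature map check: prints torch.Size([1, 2048, 13, 13])
print(Backbone()(torch.randn(1, 3, 416, 416)).shape)
```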
S22, training the network model to obtain the final network weight.
The loss function used is as follows:
the establishment of the loss function comprises 4 steps:
S221, the sum-of-squares loss of the diagonal length of the predicted rectangular box;
S222, the sum-of-squares loss between the predicted box center coordinates and the actual target center coordinates; the weight of this part of the loss function is dynamically adjusted according to the difference between the prediction box center point and the image center point (x_center, y_center), so that the neural network learns targets near the center point better and target prediction is more accurate the closer the target is to the image center;
S223, the sum-of-squares loss of the pixel depth; the depth estimation loss is introduced into the loss function for iterative optimization, so that the network can finally predict the depth information corresponding to the current pixel;
S224, the cross-entropy loss of the confidence; the confidence of the final target is iteratively optimized by introducing the confidence loss into the loss function.
One indicator parameter denotes whether the j-th candidate box of the i-th grid contains the target, taking the value 1 if the target is contained and 0 otherwise; the lambda_coord parameter represents the weight coefficient of this part of the loss function; diagonal_i represents the predicted diagonal length of the rectangular box of the i-th grid, and its ground-truth counterpart is the true diagonal length of the j-th candidate box of the i-th grid; similarly, a second indicator parameter denotes whether this candidate box does not contain the target.
Parameters x, y, w and h respectively denote the horizontal and vertical center coordinates (x, y) of the prediction box and the width and height (w, h) of the target box, normalized to the range 0-1; parameters x_center and y_center represent the coordinates of the center point of the image; the width and height of the j-th rectangular box of the i-th grid appear as further parameters; D(x, y) represents the depth information of the pixel at coordinates (x, y); the parameter p represents the prediction confidence, p_i represents the target confidence that the i-th grid is responsible for predicting, s^2 is the number of grids, i indexes the grid, and j indexes the rectangular box.
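Because the loss equation itself is not reproduced in the text, the sketch below only mirrors the four terms described in S221-S224: the diagonal sum-of-squares loss, the center sum-of-squares loss with a weight that grows near the image center, the per-pixel depth sum-of-squares loss, and the confidence cross-entropy loss. The tensor layout, the lambda_coord value and the exact form of the center weighting are assumptions, not the patent's formula.

```python
import torch
import torch.nn.functional as F


def warning_brake_loss(pred, target, lambda_coord=5.0, img_center=(208.0, 208.0)):
    """Hedged sketch of loss terms S221-S224 (layout and weights are assumed)."""
    obj = target["obj_mask"].float()            # 1 where a candidate box contains a target
    x_c, y_c = img_center

    # S221: sum-of-squares loss on the predicted diagonal length of each box
    l_diag = (obj * (pred["diagonal"] - target["diagonal"]) ** 2).sum()

    # S222: sum-of-squares loss on the box center, weighted more strongly
    # for targets near the image center (assumed inverse-distance weighting)
    d_center = torch.sqrt((target["x"] - x_c) ** 2 + (target["y"] - y_c) ** 2)
    w_center = 1.0 / (1.0 + d_center / max(x_c, y_c))
    l_xy = (obj * w_center * ((pred["x"] - target["x"]) ** 2 +
                              (pred["y"] - target["y"]) ** 2)).sum()

    # S223: sum-of-squares loss on the per-pixel depth map D(x, y)
    l_depth = ((pred["depth"] - target["depth"]) ** 2).mean()

    # S224: binary cross-entropy loss on the predicted confidence p
    l_conf = F.binary_cross_entropy(pred["p"], obj, reduction="sum")

    return lambda_coord * (l_diag + l_xy) + l_depth + l_conf


# toy usage with a single candidate box and a 13x13 depth map
pred = {"x": torch.tensor([200.0]), "y": torch.tensor([210.0]),
        "diagonal": torch.tensor([60.0]), "p": torch.tensor([0.8]),
        "depth": torch.rand(13, 13)}
target = {"x": torch.tensor([205.0]), "y": torch.tensor([208.0]),
          "diagonal": torch.tensor([55.0]), "obj_mask": torch.tensor([1.0]),
          "depth": torch.rand(13, 13)}
print(warning_brake_loss(pred, target))
```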
S23, a gradient descent strategy is used to iteratively optimize the value of the loss function, so that the loss function continuously descends to the optimal point, and the current network weight parameters are saved; at the next prediction, new values of the parameters x, y, diagonal, D(x, y) and p are obtained using the saved network weight parameters.
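The gradient-descent strategy of S23 can be shown in isolation with the toy loop below, which drives a stand-in quadratic loss to its optimum and keeps the best weights for later prediction; the learning rate, iteration count and stand-in loss are placeholders rather than values from the patent.

```python
import torch

# Toy illustration of S23: iterate gradient descent until the loss stops
# improving, then keep the current weights for use at the next prediction.
w = torch.randn(4, requires_grad=True)          # stand-in for the network weights
opt = torch.optim.SGD([w], lr=0.1)
target = torch.tensor([1.0, 2.0, 3.0, 4.0])

best, saved_weights = float("inf"), None
for step in range(200):
    loss = ((w - target) ** 2).sum()            # stand-in for the S22 loss function
    opt.zero_grad()
    loss.backward()
    opt.step()
    if loss.item() < best:
        best = loss.item()
        saved_weights = w.detach().clone()      # "save the current network weight parameters"

print(round(best, 6), saved_weights)
```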
S3, initializing network weight parameters, sending original image data into a network to obtain position and distance information of a target, formulating corresponding braking grades, and sending the position and distance information into an application layer to execute a response strategy; the method comprises the following substeps:
S31, generating position information (x, y, w and h) of a target and depth information D (x, y) corresponding to each pixel point;
S32, the target distance Dst(x, y) is calculated as the mean of the pixel depth information within the target area, by the following method:
where parameters x_center and y_center represent the coordinates of the center point of the image; x ranges over all values whose horizontal distance from the center point abscissa is within w, and y ranges over all values whose vertical distance from the center point ordinate is within h.
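A small sketch of the S32 computation is given below: the target distance is simply the mean of the predicted depth map over the target box. It assumes a NumPy depth map and reads w and h as pixel extents taken around the box center, which is one interpretation of the description above.

```python
import numpy as np


def target_distance(depth_map, x_center, y_center, w, h):
    """Dst(x, y): mean of the per-pixel depth D(x, y) inside the target box (S32)."""
    h_img, w_img = depth_map.shape
    x0, x1 = max(0, int(x_center - w)), min(w_img, int(x_center + w) + 1)
    y0, y1 = max(0, int(y_center - h)), min(h_img, int(y_center + h) + 1)
    return float(depth_map[y0:y1, x0:x1].mean())


# example: a 13x13 depth map of constant 25 m with a box centered at (6, 6)
print(target_distance(np.full((13, 13), 25.0), x_center=6, y_center=6, w=2, h=2))  # -> 25.0
```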
S33, the front vehicle target is screened out from the plurality of vehicle targets present in the image through the following flow (a code sketch of this flow is given after step S336):
S331, initializing a center distance variable dmax and a target number num_max;
S332, traversing all target depth estimates generated by the network;
S333, when the confidence of a target is larger than 0.9, calculating the distance dcentre between the target center and the image center and recording the target number num;
S334, if the calculated distance dcentre between the target center and the image center is larger than the initialized center distance variable dmax, setting dmax = dcentre and num_max = num;
S335, finishing the traversal;
S336, taking the target corresponding to the number num_max as the front vehicle target.
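The sketch below follows the S331-S336 flow literally: only detections with confidence above 0.9 are considered, and the target whose center lies farthest from the image center (the largest dcentre, per S334) is returned as the front vehicle. The detection dictionary layout is an assumption made for illustration.

```python
def select_front_vehicle(targets, image_center, conf_thresh=0.9):
    """Screening flow S331-S336: pick the front-vehicle target among all detections."""
    x_c, y_c = image_center
    dmax, num_max = -1.0, None              # S331: initialise centre distance and index
    for num, t in enumerate(targets):       # S332: traverse all detected targets
        if t["p"] <= conf_thresh:           # S333: only confident targets are considered
            continue
        dcentre = ((t["x"] - x_c) ** 2 + (t["y"] - y_c) ** 2) ** 0.5
        if dcentre > dmax:                  # S334: keep the larger centre distance
            dmax, num_max = dcentre, num
    return num_max                          # S336: number of the front-vehicle target


# example usage with two hypothetical detections (x, y = box centre, p = confidence)
detections = [{"x": 200, "y": 210, "p": 0.95},
              {"x": 120, "y": 300, "p": 0.92}]
print(select_front_vehicle(detections, image_center=(208, 208)))  # -> 1
```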
S34, the corresponding braking grade is obtained according to the mapping table of current vehicle speed, distance to the front vehicle and braking grade, shown in Table 2; the corresponding braking command is transmitted to the rear-end braking control system, and the braking control system performs the braking action to stop the vehicle.
Table 2 vehicle speed distance and brake level relation map
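Since Table 2 itself is not reproduced here, the sketch below only illustrates the shape of the S34 lookup from (current vehicle speed, distance to the front vehicle) to a braking grade; the time-to-collision heuristic, the thresholds and the grade numbering are illustrative assumptions, not the patent's actual mapping.

```python
def braking_grade(speed_kmh, distance_m):
    """Sketch of the S34 lookup; thresholds and grades are assumed, not from Table 2."""
    ttc = distance_m / max(speed_kmh / 3.6, 0.1)   # rough time-to-collision in seconds
    if ttc < 1.0:
        return 3        # full braking
    if ttc < 2.0:
        return 2        # strong braking
    if ttc < 3.5:
        return 1        # warning / light braking
    return 0            # no action


print(braking_grade(speed_kmh=60, distance_m=20))  # -> 2 under these assumed thresholds
```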
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.
The above embodiments of the present invention do not limit the scope of the present invention. Any of various other corresponding changes and modifications made according to the technical idea of the present invention should be included in the scope of the claims of the present invention.
Claims (5)
1. The early warning braking method based on deep learning is realized by utilizing an early warning braking system, wherein the early warning braking system comprises a sensing layer, a decision layer and an application layer; the sensing layer is a front-end visible light camera; the decision layer is a data processing module; the application layer is a brake control unit; the early warning braking method based on deep learning is characterized by comprising the following steps:
S1, the visible light camera is used for collecting image data and marking, establishing a mapping relation between the image data and vehicle position and distance information and generating a data set;
S2, the data processing module builds a depth estimation residual error network model, and carries out convolution operation on the data set to extract corresponding features and carries out combination learning to obtain a feature map model of the image; training the depth estimation residual error network model to obtain a final network weight;
S3, initializing network weight parameters, sending the original image data into a network to obtain position and distance information of a target, formulating corresponding braking grades, and transmitting corresponding braking commands to a rear-end braking control system.
2. The deep learning-based early warning braking method according to claim 1, wherein the step S1 comprises the following sub-steps:
s11, a visible light camera is arranged in front of the vehicle, and different imaging conditions are set to acquire image data according to different weather conditions, vehicle distances in front and vehicle types;
s12, marking the collected image data in the running process of the vehicle;
S13, establishing a mapping relation between the image data and the vehicle position and distance information and generating a data set;
S14, the image data and the corresponding vehicle position information and distance information are taken as samples and added to the data set, generating 200 pieces of labelled image data with annotation information;
S15, the labelled image data are flipped, translated and compressed by a data set expansion technique and expanded to ten times the amount of data, constructing the final labelled data set.
3. The deep learning-based early warning braking method according to claim 1, wherein step S2 comprises the sub-steps of:
S21, constructing a depth estimation network model, wherein a network of the depth estimation network model is a residual network;
The residual network model uses a multi-scale deep convolutional neural network to extract vehicle features from images. An image of size 416x416 is input into the network and convolved with a 7x7x64 convolution kernel of stride 2 to obtain a 208x208 feature map; then a 3x3 pooling layer with stride 2 and 3 Conv2_x convolution kernels produce a 104x104 output feature map; then 4 Conv3_x convolution kernels produce a 52x52 output feature map; then 23 Conv4_x convolution kernels produce a 26x26 output feature map; then 3 Conv5_x convolution kernels produce a 13x13 output feature map;
s22, training the residual error network model to obtain a final network weight;
the loss function used is as follows:
the establishment of the loss function comprises the following steps:
S221, the sum-of-squares loss of the diagonal length of the predicted rectangular box;
S222, the sum-of-squares loss between the predicted box center coordinates and the actual target center coordinates; the weight of this part of the loss function is dynamically adjusted according to the difference between the prediction box center point and the image center point (x_center, y_center), so that the neural network learns targets near the center point better and target prediction is more accurate the closer the target is to the image center;
S223, the sum-of-squares loss of the pixel depth; the depth estimation loss is introduced into the loss function for iterative optimization, so that the network can finally predict the depth information corresponding to the current pixel;
S224, the cross-entropy loss of the confidence; the confidence of the final target is iteratively optimized by introducing the confidence loss into the loss function;
One indicator parameter denotes whether the j-th candidate box of the i-th grid contains the target, taking the value 1 if the target is contained and 0 otherwise; the lambda_coord parameter represents the weight coefficient of this part of the loss function; diagonal_i represents the predicted diagonal length of the rectangular box of the i-th grid, and its ground-truth counterpart is the true diagonal length of the j-th candidate box of the i-th grid; similarly, a second indicator parameter denotes whether this candidate box does not contain the target.
The horizontal coordinate x, vertical coordinate y, width w and height h of the prediction box are normalized to the range 0-1; parameters x_center and y_center represent the coordinates of the center point of the image; the width and height of the j-th rectangular box of the i-th grid appear as further parameters; D(x, y) represents the depth information of the pixel at coordinates (x, y); p represents the prediction confidence, p_i represents the target confidence that the i-th grid is responsible for predicting, and s^2 is the number of grids.
4. The deep learning-based early warning braking method according to claim 3, wherein the S2 further comprises the sub-steps of:
S23, a gradient descent strategy is used to iteratively optimize the value of the loss function, so that the loss function continuously descends to the optimal point, and the current network weight parameters are saved; at the next prediction, new values of x, y, diagonal, D(x, y) and p are obtained using the saved network weight parameters.
5. The deep learning-based early warning braking method according to claim 1, wherein the step S3 includes:
s31, generating position information of a target and depth information D (x, y) corresponding to each pixel point according to the network weight;
S32, the target distance Dst(x, y) is calculated as the mean of the pixel depth information within the target area, by the following method:
where parameters x_center and y_center represent the coordinates of the center point of the image; x ranges over all values whose horizontal distance from the center point abscissa is within w, and y ranges over all values whose vertical distance from the center point ordinate is within h;
S33, the front vehicle target is screened out from the plurality of vehicle targets present in the image through the following flow;
S331, initializing a center distance variable dmax and a target number num_max;
S332, traversing all target depth estimates generated by the network;
S333, when the confidence of a target is larger than 0.9, calculating the distance dcentre between the target center and the image center and recording the target number num;
S334, if the calculated distance dcentre between the target center and the image center is larger than the initialized center distance variable dmax, setting dmax = dcentre and num_max = num;
S335, finishing the traversal;
S336, taking the target corresponding to the number num_max as the front vehicle target;
S34, the corresponding braking grade is obtained according to the mapping table of current vehicle speed, distance to the front vehicle and braking grade; the corresponding braking command is transmitted to the rear-end braking control system, the braking control system performs the braking action, and the vehicle stops.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211731636.4A CN115966102B (en) | 2022-12-30 | 2022-12-30 | Early warning braking method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211731636.4A CN115966102B (en) | 2022-12-30 | 2022-12-30 | Early warning braking method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115966102A CN115966102A (en) | 2023-04-14 |
CN115966102B (en) | 2024-10-08
Family
ID=87357563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211731636.4A Active CN115966102B (en) | 2022-12-30 | 2022-12-30 | Early warning braking method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115966102B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022165722A1 (en) * | 2021-02-04 | 2022-08-11 | 华为技术有限公司 | Monocular depth estimation method, apparatus and device |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106599832A (en) * | 2016-12-09 | 2017-04-26 | 重庆邮电大学 | Method for detecting and recognizing various types of obstacles based on convolution neural network |
US11042156B2 (en) * | 2018-05-14 | 2021-06-22 | Honda Motor Co., Ltd. | System and method for learning and executing naturalistic driving behavior |
US11479243B2 (en) * | 2018-09-14 | 2022-10-25 | Honda Motor Co., Ltd. | Uncertainty prediction based deep learning |
CN109377530B (en) * | 2018-11-30 | 2021-07-27 | 天津大学 | Binocular depth estimation method based on depth neural network |
CN110032949B (en) * | 2019-03-22 | 2021-09-28 | 北京理工大学 | Target detection and positioning method based on lightweight convolutional neural network |
US20200363800A1 (en) * | 2019-05-13 | 2020-11-19 | Great Wall Motor Company Limited | Decision Making Methods and Systems for Automated Vehicle |
CN110569739B (en) * | 2019-08-15 | 2022-07-05 | 上海工程技术大学 | Automobile auxiliary system and method based on deep learning and tire noise spectrogram |
CN110738697B (en) * | 2019-10-10 | 2023-04-07 | 福州大学 | Monocular depth estimation method based on deep learning |
US11628858B2 (en) * | 2020-09-15 | 2023-04-18 | Baidu Usa Llc | Hybrid planning system for autonomous vehicles |
CN112349144B (en) * | 2020-11-10 | 2022-04-19 | 中科海微(北京)科技有限公司 | Monocular vision-based vehicle collision early warning method and system |
CN112861729B (en) * | 2021-02-08 | 2022-07-08 | 浙江大学 | Real-time depth completion method based on pseudo-depth map guidance |
CN115145253A (en) * | 2021-03-16 | 2022-10-04 | 广州汽车集团股份有限公司 | End-to-end automatic driving method and system and training method of automatic driving model |
CN115272816A (en) * | 2022-06-23 | 2022-11-01 | 江苏嘉和天盛信息科技有限公司 | Road traffic target tracking method based on deep convolutional neural network |
CN115359474A (en) * | 2022-07-27 | 2022-11-18 | 成都信息工程大学 | Lightweight three-dimensional target detection method, device and medium suitable for mobile terminal |
CN115258862A (en) * | 2022-07-29 | 2022-11-01 | 福建工程学院 | Deep learning-based two-wheel vehicle elevator access prohibiting system and method |
CN115424237A (en) * | 2022-08-16 | 2022-12-02 | 重庆大学 | Forward vehicle identification and distance detection method based on deep learning |
- 2022-12-30 CN CN202211731636.4A patent/CN115966102B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022165722A1 (en) * | 2021-02-04 | 2022-08-11 | 华为技术有限公司 | Monocular depth estimation method, apparatus and device |
Also Published As
Publication number | Publication date |
---|---|
CN115966102A (en) | 2023-04-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11532151B2 (en) | Vision-LiDAR fusion method and system based on deep canonical correlation analysis | |
CN110032949B (en) | Target detection and positioning method based on lightweight convolutional neural network | |
CN108983219B (en) | Fusion method and system for image information and radar information of traffic scene | |
CN113610044B (en) | 4D millimeter wave three-dimensional target detection method and system based on self-attention mechanism | |
CN112396650A (en) | Target ranging system and method based on fusion of image and laser radar | |
CN114359181B (en) | Intelligent traffic target fusion detection method and system based on image and point cloud | |
CN111860072A (en) | Parking control method and device, computer equipment and computer readable storage medium | |
CN110097047B (en) | Vehicle detection method based on deep learning and adopting single line laser radar | |
CN113298781B (en) | Mars surface three-dimensional terrain detection method based on image and point cloud fusion | |
CN113449650A (en) | Lane line detection system and method | |
CN105741234A (en) | Three-dimensional panorama look-around based automatic anchoring visual assistance system for unmanned ship | |
CN117237919A (en) | Intelligent driving sensing method for truck through multi-sensor fusion detection under cross-mode supervised learning | |
CN115100741B (en) | Point cloud pedestrian distance risk detection method, system, equipment and medium | |
CN114581748B (en) | Multi-agent perception fusion system based on machine learning and implementation method thereof | |
CN117808689A (en) | Depth complement method based on fusion of millimeter wave radar and camera | |
CN115128628A (en) | Road grid map construction method based on laser SLAM and monocular vision | |
CN116129234A (en) | Attention-based 4D millimeter wave radar and vision fusion method | |
CN115690746A (en) | Non-blind area sensing method and system based on vehicle-road cooperation | |
CN116486396A (en) | 3D target detection method based on 4D millimeter wave radar point cloud | |
CN113888463B (en) | Wheel rotation angle detection method and device, electronic equipment and storage medium | |
CN114648549A (en) | Traffic scene target detection and positioning method fusing vision and laser radar | |
CN114218999A (en) | Millimeter wave radar target detection method and system based on fusion image characteristics | |
CN117423077A (en) | BEV perception model, construction method, device, equipment, vehicle and storage medium | |
CN115966102B (en) | Early warning braking method based on deep learning | |
CN114898144B (en) | Automatic alignment method based on camera and millimeter wave radar data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||