
CN116453087A - Automatic driving obstacle detection method of data closed loop - Google Patents


Info

Publication number
CN116453087A
Authority
CN
China
Prior art keywords
vehicle
data
fusion
detection result
obstacle detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310328483.7A
Other languages
Chinese (zh)
Other versions
CN116453087B (en)
Inventor
潘焱
梁艳菊
鲁斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Internet Of Things Innovation Center Co ltd
Original Assignee
Wuxi Internet Of Things Innovation Center Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Internet Of Things Innovation Center Co ltd filed Critical Wuxi Internet Of Things Innovation Center Co ltd
Priority to CN202310328483.7A
Publication of CN116453087A
Application granted
Publication of CN116453087B
Legal status: Active (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of automatic driving, and particularly discloses a data closed-loop automatic driving obstacle detection method applied to a vehicle end. The method comprises the following steps: pre-fusion is carried out on the collected image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle to obtain pre-fusion acquisition data; the image data, the laser radar data, the ultrasonic radar data and the pre-fusion acquisition data are respectively input into corresponding detection models for obstacle detection, so as to correspondingly obtain a first, a second, a third and a fourth obstacle detection result; the four obstacle detection results are then post-fused to obtain and output the post-fusion obstacle detection result of the surrounding environment of the vehicle.

Description

Automatic driving obstacle detection method of data closed loop
Technical Field
The invention relates to the technical field of automatic driving, in particular to a data closed-loop automatic driving obstacle detection method.
Background
In current automatic driving perception algorithms, data labeling is separated from the model training and model deployment process: a large amount of raw data must first be collected, the dataset is then labeled manually, the model is trained with the labeled dataset, and the model is finally deployed on the vehicle industrial personal computer. However, automatic driving data tend to follow a long-tailed distribution, and it is impossible to collect and train on data covering every condition. The designed method therefore iterates the model continuously with new data in a data closed-loop manner, reducing labeling cost while improving the detection accuracy of the model.
An automatic driving obstacle detection algorithm needs to accurately detect obstacles around the vehicle body; the detection process includes localizing and classifying the obstacles. First, current automatic driving obstacle detection algorithms are based on deep learning, which is a data-driven supervised learning approach: a large amount of well-labeled data is needed to train the deep learning model, and the labeling cost of such a dataset is too high. Second, an autonomous vehicle is equipped with multiple sensors (such as cameras, laser radar and millimeter wave radar) to collect data about the surroundings of the vehicle body. The detection results of these sensors can be fused to improve the accuracy of the detection algorithm, but current automatic driving schemes usually adopt only one fusion strategy; the advantages of the multiple sensors are not fully exploited, so detection accuracy is low and environmental adaptability is poor.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a data closed-loop automatic driving obstacle detection method, so as to solve the problem that current automatic driving schemes usually adopt only one fusion strategy and do not fully exploit the advantages of multiple sensors, which results in low detection accuracy and poor environmental adaptability.
As a first aspect of the present invention, there is provided a data closed-loop automatic driving obstacle detection method applied to a vehicle end, the data closed-loop automatic driving obstacle detection method including:
step S1: respectively acquiring image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle;
step S2: pre-fusion is carried out on the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle to obtain pre-fusion acquisition data of the surrounding environment of the vehicle;
step S3: respectively inputting the image data, the laser radar data, the ultrasonic radar data and the pre-fusion acquisition data of the surrounding environment of the vehicle into corresponding detection models for detecting obstacles so as to correspondingly obtain a first obstacle detection result, a second obstacle detection result, a third obstacle detection result and a fourth obstacle detection result of the surrounding environment of the vehicle;
step S4: post-fusing the first obstacle detection result, the second obstacle detection result, the third obstacle detection result and the fourth obstacle detection result of the surrounding environment of the vehicle to obtain a post-fused obstacle detection result of the surrounding environment of the vehicle;
step S5: and outputting a post-fusion obstacle detection result of the surrounding environment of the vehicle.
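For illustration only, the following Python sketch shows how steps S1 to S5 could be orchestrated at the vehicle end. The invention does not prescribe concrete software interfaces; the sensor objects, detection models and fusion functions passed in here are hypothetical placeholders.

# Minimal sketch of the vehicle-end detection pipeline (steps S1 to S5).
# Every name below is an illustrative placeholder, not an API defined by the invention.
def detect_obstacles(camera, lidar, ultrasonic, models, pre_fuse, post_fuse):
    # step S1: acquire raw sensor data of the surrounding environment of the vehicle
    d_i = camera.read()      # image data
    d_l = lidar.read()       # laser radar point cloud
    d_r = ultrasonic.read()  # ultrasonic radar data

    # step S2: pre-fusion at the data level
    d_f = pre_fuse(d_i, d_l, d_r)

    # step S3: run each modality (and the pre-fused data) through its detector
    r_i = models["image"](d_i)       # first obstacle detection result
    r_l = models["lidar"](d_l)       # second obstacle detection result
    r_r = models["ultrasonic"](d_r)  # third obstacle detection result
    r_f = models["fusion"](d_f)      # fourth obstacle detection result

    # steps S4 and S5: post-fuse the four results and output
    return post_fuse(r_i, r_l, r_r, r_f)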
Further, the method further comprises the following steps:
executing the steps S2 to S4 through a vehicle end sensing module of the vehicle end;
uploading the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle, which are acquired by the vehicle end, to a cloud end, and storing the image data, the laser radar data and the ultrasonic radar data in the cloud end;
after the cloud end stores the acquired image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle, the cloud end sensing module of the cloud end executes the steps S2 to S4 to obtain a post-fusion obstacle detection result of the surrounding environment of the vehicle, which is output by the cloud end;
screening the post-fusion obstacle detection result of the vehicle surrounding environment output by the cloud end to obtain screened training data, and adding the screened training data into a data set;
and respectively adjusting the vehicle end sensing module and the cloud sensing module according to the training data in the data set to obtain an adjusted vehicle end sensing module and an adjusted cloud sensing module.
Further, the adjusting the vehicle-end sensing module and the cloud sensing module according to the training data in the data set to obtain an adjusted vehicle-end sensing module and an adjusted cloud sensing module, further includes:
training the vehicle-end sensing module and the cloud sensing module according to the training data in the data set to obtain a trained vehicle-end sensing module and a trained cloud sensing module; meanwhile, knowledge distillation is carried out on the vehicle-end sensing module by utilizing the cloud sensing module so as to obtain a distilled vehicle-end sensing module;
the trained vehicle end sensing module gamma car1 The trained cloud sensing module gamma cloud The distilled vehicle end sensing module gamma car2 The calculation formulas of (a) are respectively as follows:
wherein, gamma' car ,γ' cloud Respectively representing a vehicle end sensing module before adjustment and a cloud sensing module before adjustment,representing training and knowledge distillation, respectively, and d represents the dataset.
Further, the acquiring the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle respectively further includes:
collecting image data of the surrounding environment of the vehicle through a camera;
collecting laser radar data of the surrounding environment of the vehicle through a laser radar;
and acquiring ultrasonic radar data of the surrounding environment of the vehicle through a millimeter wave radar.
Further, in the step S2 and the step S3, the method further includes:
fourth obstacle detection result r of the vehicle surroundings f The calculation formula of (2) is as follows:
r f =H f (F d (d i ,d l ,d r ))
wherein d i ,d l ,d r Respectively representing image data, laser radar data and ultrasonic radar data, F d Representing data pre-fusion, H f Representing a pre-fusion acquisition data detection model r f Representing the obstacle detection result of the pre-fusion acquisition data.
Further, in the step S4 and the step S5, the method further includes:
first obstacle detection result r of the vehicle surroundings i Second obstacle detection result r l Third obstacle detection result r r The calculation formulas of (a) are respectively as follows:
r i =H i (d i )
r l =H l (d f )
r r =H r (d r )
wherein r is i Representative image data d i I.e. the first obstacle detection result; r is (r) l Representing lidar data d l A detection result of the second obstacle; r is (r) r Representative of ultrasonic radar data d r I.e., a third obstacle detection result; h i ,H l ,H r Respectively representing an image data detection model, a laser radar data detection model and an ultrasonic radar data detection model;
the calculation formula of the rear fusion obstacle detection result r of the surrounding environment of the vehicle is as follows:
r=F r (r i ,r l ,r r ,r f )
wherein r is f Representing an obstacle detection result of the pre-fusion acquired data, namely a fourth obstacle detection result; f (F) r Post fusion is represented, and r represents post fusion obstacle detection results.
Further, the method further comprises the following steps:
the method comprises the steps of obtaining post-fusion obstacle detection results of vehicle surrounding environments at different moments, wherein the post-fusion obstacle detection results of the vehicle surrounding environments at each moment are obtained by post-fusion of a first obstacle detection result, a second obstacle detection result, a third obstacle detection result and a fourth obstacle detection result at each moment;
final fusion is carried out on the detection results of the post-fusion obstacle of the surrounding environment of the vehicle at all times so as to obtain the final detection results of the post-fusion obstacle of the surrounding environment of the vehicle;
and outputting a final detection result of the post-fusion obstacle of the surrounding environment of the vehicle.
Further, the method further comprises the following steps:
the detection results of post-fusion obstacles of the surrounding environment of the vehicle at different moments are arranged into sequence data;
processing the sequence data with an LSTM or a Transformer to obtain a final post-fusion obstacle detection result R of the surrounding environment of the vehicle;
the final post-fusion obstacle detection result R of the surrounding environment of the vehicle is calculated as:
R = H_LSTM(r_{T-n}, r_{T-n+1}, ..., r_{T-1}, r_T)
or
R = H_Transformer(r_{T-n}, r_{T-n+1}, ..., r_{T-1}, r_T)
wherein H_LSTM and H_Transformer respectively represent processing of the sequence data by an LSTM and by a Transformer, and r_{T-n}, r_{T-n+1}, ..., r_{T-1}, r_T respectively represent the post-fusion obstacle detection results at the times T-n, T-n+1, ..., T-1 and T.
The data closed-loop automatic driving obstacle detection method provided by the invention has the following advantages: by adopting two fusion strategies, the advantages of the multiple sensors are fully exploited and the detection accuracy is improved; at the same time, the multi-sensor fusion approach further reduces the difficulty of labeling the dataset.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification, illustrate the invention and together with the description serve to explain, without limitation, the invention.
Fig. 1 is a flowchart of a method for detecting an automatic driving obstacle with a closed data loop according to the present invention.
Fig. 2 is a flowchart of an embodiment of a method for detecting an automatic driving obstacle with a closed data loop according to the present invention.
Fig. 3 is a schematic view of the camera provided by the invention mounted on a vehicle body.
Fig. 4 is a schematic diagram of fusion before data acquisition and fusion after detection result provided by the invention.
Fig. 5 is a schematic diagram of the memory-based post-fusion of the fusion detection results at all times.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the invention herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this embodiment, a method for detecting an autopilot obstacle with a closed data loop is provided, and the method is applied to a vehicle end, and fig. 1 is a flowchart of the method for detecting an autopilot obstacle with a closed data loop. As shown in fig. 1, the data closed loop automatic driving obstacle detection method includes:
step S1: respectively acquiring image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle;
preferably, the acquiring the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle respectively further includes:
collecting image data of the surrounding environment of the vehicle through a camera;
collecting laser radar data of the surrounding environment of the vehicle through a laser radar;
and acquiring ultrasonic radar data of the surrounding environment of the vehicle through a millimeter wave radar.
Specifically, vehicle body data are collected by data acquisition devices comprising a camera, a mechanical rotating laser radar and a millimeter wave radar. Because a single camera has a fixed viewing angle while the surrounding environment must be sensed through 360 degrees, multiple cameras need to be mounted on the vehicle body so that there is no blind zone in the field of view, as shown in fig. 3. The data collected by these devices are sent to the sensing module at the vehicle end and to the data storage module at the cloud.
In the embodiment of the invention: (1) the vehicle-mounted data acquisition unit (comprising a camera, a laser radar and a millimeter wave radar) collects data around the vehicle body at a preset frequency; the data are sent to the vehicle-end sensing module, and the vehicle end also sends the data to the cloud. (2) The vehicle-end perception algorithm processes the data collected by the acquisition unit to obtain the spatial positions and categories of obstacles, and the detection results are sent to the other automatic driving modules after post-processing. (3) The cloud perception algorithm model detects the data collected at the vehicle end to obtain the spatial positions and categories of obstacles; these serve as the labeling result of the data and, after review, are added to the dataset. (4) The vehicle-end model and the cloud model are fine-tuned with the data in the training dataset, and the vehicle-end model is distilled from the cloud large model. (5) Steps (1)-(4) are repeated. A minimal sketch of one iteration of this loop is given below.
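The following Python sketch illustrates one such iteration. All names (upload, publish, review, trainer and so on) are hypothetical placeholders rather than interfaces defined by the invention.

# Sketch of one iteration of the data closed loop (1)-(5); all callables are
# supplied by the caller and are purely illustrative.
def closed_loop_iteration(sensors, vehicle_model, cloud_model, dataset,
                          trainer, upload, publish, review):
    # (1) collect data around the vehicle body at a preset frequency and send
    #     it to the vehicle-end sensing module and to the cloud
    frame = sensors.capture()
    upload(frame)

    # (2) vehicle-end perception: obstacle positions and classes, sent to the
    #     other automatic driving modules after post-processing
    publish(vehicle_model.detect(frame))

    # (3) cloud perception labels the same data; the result enters the dataset
    #     only after review/screening
    cloud_result = cloud_model.detect(frame)
    if review(cloud_result):
        dataset.add(frame, cloud_result)

    # (4) fine-tune both models on the updated dataset, then distill the cloud
    #     (teacher) model into the vehicle-end (student) model
    trainer.fine_tune(vehicle_model, dataset)
    trainer.fine_tune(cloud_model, dataset)
    trainer.distill(student=vehicle_model, teacher=cloud_model, data=dataset)
    # (5) the loop is repeated by calling this function again on new frames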
Step S2: front fusion is carried out on the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle to obtain front fusion acquisition data of the surrounding environment of the vehicle;
step S3: respectively inputting the image data, the laser radar data, the ultrasonic radar data and the front fusion acquisition data of the surrounding environment of the vehicle into corresponding detection models for detecting obstacles so as to correspondingly obtain a first obstacle detection result, a second obstacle detection result, a third obstacle detection result and a fourth obstacle detection result of the surrounding environment of the vehicle;
in the embodiment of the invention, the main function of the vehicle end sensing module is to process the data acquired by the vehicle end to obtain the accurate spatial position and category of the obstacle around the vehicle body. The vehicle end sensing module needs to process image data, laser radar point cloud data and ultrasonic radar data, and the obtained spatial position and category of the obstacle relate to detection and fusion of the deep learning model to each data. The fusion method can be divided into three types, namely pre-fusion, feature fusion and post-fusion. The invention adopts two fusion strategies to improve the detection precision of the obstacle, namely front fusion and rear fusion, and is particularly shown in figure 4.
Preferably, as shown in fig. 4, in step S2 and step S3 the image data, the laser radar data and the ultrasonic radar data are fused at the data level, and the fused data are then sent to the fusion detection model to obtain its detection result. The process can be described by the following formula:
the fourth obstacle detection result r_f of the surrounding environment of the vehicle is calculated as:
r_f = H_f(F_d(d_i, d_l, d_r))
wherein d_i, d_l and d_r respectively represent the image data, the laser radar data and the ultrasonic radar data, F_d represents data pre-fusion, H_f represents the pre-fusion acquisition data detection model, and r_f represents the obstacle detection result of the pre-fusion acquisition data.
Specifically, the pre-fusion acquisition data detection model H_f is implemented with the MV3D algorithm, which is prior art well known to those skilled in the art and is not described in detail here.
Specifically, the data pre-fusion F_d is implemented by IHS transformation, wavelet transformation, principal component analysis transformation (PCT), K-T transformation or a deep learning method, all of which are prior art well known to those skilled in the art and are not described in detail here.
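As one concrete illustration of what the data pre-fusion F_d could look like, the NumPy sketch below projects the laser radar points into the image plane and appends a sparse depth channel to the image, keeping the ultrasonic ranges as a separate feature vector. This is only an assumed, simplified realization for clarity; the invention itself names IHS, wavelet, PCT, K-T and deep learning methods, and the calibration matrices used here are assumptions.

import numpy as np

def pre_fuse(image, lidar_points, ultra_ranges, K, T_cam_lidar):
    # Illustrative data-level pre-fusion: append a sparse laser radar depth
    # channel to the image. K is the 3x3 camera intrinsic matrix and
    # T_cam_lidar the 4x4 lidar-to-camera extrinsic transform (assumed calibrated).
    h, w, _ = image.shape
    depth = np.zeros((h, w, 1), dtype=np.float32)

    # transform lidar points (N, 3) into the camera frame
    pts = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    cam = (T_cam_lidar @ pts.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]  # keep only points in front of the camera

    # project to pixel coordinates and fill the sparse depth channel
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid], u[valid], 0] = cam[valid, 2]

    fused_image = np.concatenate([image.astype(np.float32), depth], axis=2)
    return fused_image, np.asarray(ultra_ranges, dtype=np.float32)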
Step S4: post-fusing the first obstacle detection result, the second obstacle detection result, the third obstacle detection result and the fourth obstacle detection result of the surrounding environment of the vehicle to obtain a post-fused obstacle detection result of the surrounding environment of the vehicle;
step S5: and outputting a post-fusion obstacle detection result of the surrounding environment of the vehicle.
Preferably, as shown in fig. 4, in step S4 and step S5 the image data, the laser radar point cloud data and the ultrasonic radar data are each passed through their respective detection models to obtain the respective detection results. The image detection result, the laser radar detection result, the ultrasonic radar detection result and the fusion detection result are then fused. The process can be described by the following formulas.
The first obstacle detection result r_i, the second obstacle detection result r_l and the third obstacle detection result r_r of the surrounding environment of the vehicle are respectively calculated as:
r_i = H_i(d_i)
r_l = H_l(d_l)
r_r = H_r(d_r)
wherein r_i represents the detection result for the image data d_i, i.e. the first obstacle detection result; r_l represents the detection result for the laser radar data d_l, i.e. the second obstacle detection result; r_r represents the detection result for the ultrasonic radar data d_r, i.e. the third obstacle detection result; and H_i, H_l and H_r respectively represent the image data detection model, the laser radar data detection model and the ultrasonic radar data detection model.
specifically, the image data detection model H i Realized by SMOKE algorithm, laser radar data detection model H l Is realized by a PointPicllar algorithm, and an ultrasonic radar data detection model H r Is realized by a DBSCAN clustering algorithm, is the prior art well known to the person skilled in the art,and will not be described in detail herein.
The post-fusion obstacle detection result r of the surrounding environment of the vehicle is calculated as:
r = F_r(r_i, r_l, r_r, r_f)
wherein r_f represents the obstacle detection result of the pre-fusion acquisition data, i.e. the fourth obstacle detection result; F_r represents post-fusion; and r represents the post-fusion obstacle detection result.
Specifically, the post-fusion F_r is implemented with a Kalman filtering algorithm, which is prior art well known to those skilled in the art and is not described in detail here.
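The invention realizes F_r with Kalman filtering; the Python sketch below is only a simplified stand-in to illustrate the idea of late fusion. It greedily merges detections from the four sources whose centers lie close together, using score-weighted averaging, and the detection format it assumes is hypothetical. It is not the Kalman-filter fusion itself.

import numpy as np

def post_fuse(result_lists, match_dist=1.5):
    # Simplified late fusion over [r_i, r_l, r_r, r_f]; each detection is a dict
    # {"center": (x, y), "size": (l, w), "cls": str, "score": float}.
    # Detections whose centers lie within match_dist metres are treated as the
    # same obstacle and merged by score-weighted averaging.
    fused = []
    for results in result_lists:
        for det in results:
            for obj in fused:
                if np.linalg.norm(np.subtract(det["center"], obj["center"])) < match_dist:
                    w_old, w_new = obj["score"], det["score"]
                    total = w_old + w_new
                    obj["center"] = tuple((w_old * np.array(obj["center"])
                                           + w_new * np.array(det["center"])) / total)
                    obj["size"] = tuple((w_old * np.array(obj["size"])
                                         + w_new * np.array(det["size"])) / total)
                    if w_new > w_old:
                        obj["cls"] = det["cls"]
                    obj["score"] = max(w_old, w_new)
                    break
            else:
                fused.append(dict(det))
    return fused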
In the post-fusion strategy, the detection results are fused with a memory-based post-fusion method, as shown in fig. 5.
Specifically, as shown in fig. 5, the method further includes:
the method comprises the steps of obtaining post-fusion obstacle detection results of vehicle surrounding environments at different moments, wherein the post-fusion obstacle detection results of the vehicle surrounding environments at each moment are obtained by post-fusion of a first obstacle detection result, a second obstacle detection result, a third obstacle detection result and a fourth obstacle detection result at each moment;
final fusion is carried out on the detection results of the post-fusion obstacle of the surrounding environment of the vehicle at all times so as to obtain the final detection results of the post-fusion obstacle of the surrounding environment of the vehicle;
and outputting a final detection result of the post-fusion obstacle of the surrounding environment of the vehicle.
Specifically, as shown in fig. 5, the horizontal arrow indicates the time axis and each vertical arrow indicates that post-fusion of the sensor data is performed at a certain moment on the time axis. This strategy is a post-fusion strategy with memory and further includes:
the detection results of post-fusion obstacle in the surrounding environment of the vehicle at different moments are arranged into sequence data (r T-n ,r T-n+1 ,...,r T-1 ,r T );
Detecting the sequence data by adopting LSTM or a transducer to obtain a final detection result R of the post-fusion obstacle in the surrounding environment of the vehicle; modeling the detection result by adopting the modeling capability of LSTM or a transducer to the sequence data, and finally outputting the fusion result at the current moment. The detection result is fused with the previous detection result, so that the problem of false leakage detection is effectively solved. The post-fusion strategy with memory can be expressed by the following formula.
The final post-fusion obstacle detection result R of the surrounding environment of the vehicle is calculated as:
R = H_LSTM(r_{T-n}, r_{T-n+1}, ..., r_{T-1}, r_T)
or
R = H_Transformer(r_{T-n}, r_{T-n+1}, ..., r_{T-1}, r_T)
wherein H_LSTM and H_Transformer respectively represent processing of the sequence data by an LSTM and by a Transformer; both have a strong ability to model sequence data, and the choice depends on the actual situation: an LSTM is typically used where edge (vehicle) computing power is limited, and a Transformer at the cloud. r_{T-n}, r_{T-n+1}, ..., r_{T-1}, r_T respectively represent the post-fusion obstacle detection results at the times T-n, T-n+1, ..., T-1 and T.
In the embodiment of the invention, the vehicle-end sensing module adopts an LSTM and the cloud sensing module adopts a Transformer. LSTM (Long Short-Term Memory) is a long short-term memory network, a type of recurrent neural network suitable for processing and predicting important events with relatively long intervals and delays in a time series. The Transformer is a model that uses the attention mechanism to increase model training speed. Both LSTM and Transformer are well known to those skilled in the art and will not be described in detail here.
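A minimal PyTorch sketch of the memory-based post-fusion H_LSTM follows. It assumes that each post-fusion result r_t has already been encoded as a fixed-length feature vector, which is an assumption the invention does not fix; the output at the last time step is taken as the final result R. At the cloud, an nn.TransformerEncoder over the same sequence could play the role of H_Transformer.

import torch
import torch.nn as nn

class MemoryPostFusion(nn.Module):
    # Sketch of H_LSTM: the post-fusion results r_{T-n}..r_T, each encoded as a
    # feature vector, are processed as a sequence; the last-step output is R.
    def __init__(self, feat_dim=64, hidden_dim=128, out_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, r_seq):
        # r_seq: (batch, n + 1, feat_dim), ordered from time T-n to time T
        out, _ = self.lstm(r_seq)
        return self.head(out[:, -1])  # final fused result R at time T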
Preferably, as shown in fig. 2, further comprising:
executing the steps S2 to S4 through a vehicle end sensing module of the vehicle end;
uploading the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle, which are acquired by the vehicle end, to a cloud end, and storing the image data, the laser radar data and the ultrasonic radar data in the cloud end;
the data transmitted to the cloud end by the vehicle end is stored through the hard disk.
After the cloud end stores the acquired image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle, the cloud end sensing module of the cloud end executes the steps S2 to S4 to obtain a post-fusion obstacle detection result of the surrounding environment of the vehicle, which is output by the cloud end;
screening the post-fusion obstacle detection result of the vehicle surrounding environment output by the cloud end to obtain screened training data, and adding the screened training data into a data set;
and respectively adjusting the vehicle end sensing module and the cloud sensing module according to the training data in the data set to obtain an adjusted vehicle end sensing module and an adjusted cloud sensing module.
It should be noted that at the vehicle end the computing power of the chip is limited, so both the computing speed and the accuracy of the algorithm are considered when selecting a model. At the cloud, computing resources are sufficient, so the selection principle takes accuracy as the first criterion and computing speed as a secondary consideration. In the invention, the cloud sensing module has the same structure as the vehicle-end sensing module; the difference is that the cloud selects a large model (for example, a backbone network with higher computational complexity, a deeper network, and so on).
Because the cloud sensing module is a large model with very high detection accuracy and very strong generalization capability, its detection results can serve as the raw information for data annotation, and the cloud large model is therefore used to annotate the data automatically. The automatically annotated results are added to the dataset after manual screening.
Specifically, the adjusting the vehicle end sensing module and the cloud sensing module according to the training data in the data set to obtain an adjusted vehicle end sensing module and an adjusted cloud sensing module, further includes:
training the vehicle-end sensing module and the cloud sensing module according to the training data in the data set to obtain a trained vehicle-end sensing module and a trained cloud sensing module; meanwhile, knowledge distillation is carried out on the vehicle-end sensing module by utilizing the cloud sensing module so as to obtain a distilled vehicle-end sensing module;
the effect of adjustment has two, first, adopts new dataset to finely tune car end perception module and high in the clouds perception module to improve the detection accuracy of car end perception module and high in the clouds perception module. Secondly, knowledge distillation is carried out on a vehicle-end sensing module by utilizing a cloud-end sensing large model, the cloud-end large model is used as a teacher model, and the vehicle-end sensing module is used as a student model, so that the accuracy of the vehicle-end sensing module is improved. Can be described by the following formula. The first two formulas represent fine tuning of the car end model and the cloud end model respectively, and the third formula represents knowledge distillation. The fine tuning and the knowledge distillation can be freely selected according to actual conditions, and the scheme adopts the fine tuning and the knowledge distillation at the same time. Through continuous automatic labeling, data are added into the data set, and then the model is adjusted through a trainer, so that the detection capability of the model is continuously improved.
The trained vehicle-end sensing module γ_car1, the trained cloud sensing module γ_cloud and the distilled vehicle-end sensing module γ_car2 are respectively calculated as:
γ_car1 = F_train(γ'_car, d)
γ_cloud = F_train(γ'_cloud, d)
γ_car2 = F_distill(γ'_car, γ_cloud)
wherein γ'_car and γ'_cloud respectively represent the vehicle-end sensing module before adjustment and the cloud sensing module before adjustment, F_train and F_distill respectively represent training and knowledge distillation, and d represents the dataset.
It should be noted that, regarding knowledge distillation, the principle of FitNet is to train a narrow and deep student network with a shallow and wide teacher network. Its main purpose is to transfer the relevant knowledge of a trained large, complex model to a small, simple student model, so as to suit applications with high requirements on computing capacity and computing efficiency.
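As an illustration of the distillation step, the PyTorch sketch below applies the common softened-logit distillation loss, using the frozen cloud model as the teacher and the vehicle-end model as the student. It is a simplified sketch: FitNet as cited additionally matches intermediate features, which is omitted here, and the temperature and loss weights are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_step(student, teacher, batch, labels, optimizer,
                      temperature=2.0, alpha=0.5):
    # One training step: weighted sum of the hard-label loss and the softened
    # KL divergence between student and teacher logits (assumed classifiers).
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(batch)

    s_logits = student(batch)
    hard_loss = F.cross_entropy(s_logits, labels)
    soft_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    loss = alpha * hard_loss + (1.0 - alpha) * soft_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()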
In the embodiment of the invention, two fusion strategies, namely pre-fusion and post-fusion, are adopted to improve the detection precision of the sensing module.
In the embodiment of the invention, a memory-based post-fusion strategy is provided, which uses the sequence-modeling capability of a recurrent neural network to fuse the detection results over time and can effectively alleviate problems such as missed detection and false detection.
In the embodiment of the invention, the dataset is updated on the basis of automatic labeling by the cloud module, the modules are updated by fine-tuning and knowledge distillation, and the detection capability of the modules is continuously improved as new data are continuously added.
In the embodiment of the invention, the data is automatically marked, the data set is updated, and the new data set is used for updating the module, so that the problem of long tail distribution of the automatic driving data can be effectively solved.
The data closed-loop automatic driving obstacle detection method provided by the invention adopts two fusion strategies to fully exploit the advantages of multiple sensors and improve obstacle detection accuracy, while effectively addressing the problems of difficult and costly dataset labeling and of model update iteration in automatic driving.
It is to be understood that the above embodiments are merely illustrative of the application of the principles of the present invention, but not in limitation thereof. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the invention, and are also considered to be within the scope of the invention.

Claims (8)

1. The data closed-loop automatic driving obstacle detection method is applied to a vehicle end and is characterized by comprising the following steps of:
step S1: respectively acquiring image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle;
step S2: pre-fusion is carried out on the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle to obtain pre-fusion acquisition data of the surrounding environment of the vehicle;
step S3: respectively inputting the image data, the laser radar data, the ultrasonic radar data and the pre-fusion acquisition data of the surrounding environment of the vehicle into corresponding detection models for detecting obstacles so as to correspondingly obtain a first obstacle detection result, a second obstacle detection result, a third obstacle detection result and a fourth obstacle detection result of the surrounding environment of the vehicle;
step S4: post-fusing the first obstacle detection result, the second obstacle detection result, the third obstacle detection result and the fourth obstacle detection result of the surrounding environment of the vehicle to obtain a post-fused obstacle detection result of the surrounding environment of the vehicle;
step S5: and outputting a post-fusion obstacle detection result of the surrounding environment of the vehicle.
2. The data closed loop automatic driving obstacle detection method according to claim 1, further comprising:
executing the steps S2 to S4 through a vehicle end sensing module of the vehicle end;
uploading the image data, the laser radar data and the ultrasonic radar data of the surrounding environment of the vehicle, which are acquired by the vehicle end, to a cloud end, and storing the image data, the laser radar data and the ultrasonic radar data in the cloud end;
after the cloud end stores the acquired image data, laser radar data and ultrasonic radar data of the surrounding environment of the vehicle, the cloud end sensing module of the cloud end executes the steps S2 to S4 to obtain a post-fusion obstacle detection result of the surrounding environment of the vehicle, which is output by the cloud end;
screening the post-fusion obstacle detection result of the vehicle surrounding environment output by the cloud end to obtain screened training data, and adding the screened training data into a data set;
and respectively adjusting the vehicle end sensing module and the cloud sensing module according to the training data in the data set to obtain an adjusted vehicle end sensing module and an adjusted cloud sensing module.
3. The method for detecting an automatic driving obstacle according to claim 2, wherein the adjusting the vehicle-end sensing module and the cloud sensing module according to the training data in the dataset to obtain an adjusted vehicle-end sensing module and an adjusted cloud sensing module, further comprises:
training the vehicle-end sensing module and the cloud sensing module according to the training data in the data set to obtain a trained vehicle-end sensing module and a trained cloud sensing module; meanwhile, knowledge distillation is carried out on the vehicle-end sensing module by utilizing the cloud sensing module so as to obtain a distilled vehicle-end sensing module;
the trained vehicle end sensing module gamma car1 The trained cloud sensing module gamma cloud The distilled vehicle end sensing module gamma car2 The calculation formulas of (a) are respectively as follows:
wherein, gamma' car ,γ' cloud Respectively representing a vehicle end sensing module before adjustment and a cloud sensing module before adjustment,representing training and knowledge distillation, respectively, and d represents the dataset.
4. The method for detecting an automated driving obstacle according to claim 1, wherein the acquiring of the image data, the lidar data, and the ultrasonic radar data of the surrounding environment of the vehicle, respectively, further comprises:
collecting image data of the surrounding environment of the vehicle through a camera;
collecting laser radar data of the surrounding environment of the vehicle through a laser radar;
and acquiring ultrasonic radar data of the surrounding environment of the vehicle through a millimeter wave radar.
5. The method for detecting a closed-loop data autopilot obstacle according to claim 1, wherein in step S2 and step S3, further comprising:
the fourth obstacle detection result r_f of the surrounding environment of the vehicle is calculated as:
r_f = H_f(F_d(d_i, d_l, d_r))
wherein d_i, d_l and d_r respectively represent the image data, the laser radar data and the ultrasonic radar data, F_d represents data pre-fusion, H_f represents the pre-fusion acquisition data detection model, and r_f represents the obstacle detection result of the pre-fusion acquisition data.
6. The method for detecting a data closed loop automatic driving obstacle according to claim 5, wherein in step S4 and step S5, further comprising:
the first obstacle detection result r_i, the second obstacle detection result r_l and the third obstacle detection result r_r of the surrounding environment of the vehicle are respectively calculated as:
r_i = H_i(d_i)
r_l = H_l(d_l)
r_r = H_r(d_r)
wherein r_i represents the detection result for the image data d_i, i.e. the first obstacle detection result; r_l represents the detection result for the laser radar data d_l, i.e. the second obstacle detection result; r_r represents the detection result for the ultrasonic radar data d_r, i.e. the third obstacle detection result; and H_i, H_l and H_r respectively represent the image data detection model, the laser radar data detection model and the ultrasonic radar data detection model;
the post-fusion obstacle detection result r of the surrounding environment of the vehicle is calculated as:
r = F_r(r_i, r_l, r_r, r_f)
wherein r_f represents the obstacle detection result of the pre-fusion acquisition data, i.e. the fourth obstacle detection result; F_r represents post-fusion; and r represents the post-fusion obstacle detection result.
7. The data closed-loop automatic driving obstacle detection method according to claim 6, further comprising:
the method comprises the steps of obtaining post-fusion obstacle detection results of vehicle surrounding environments at different moments, wherein the post-fusion obstacle detection results of the vehicle surrounding environments at each moment are obtained by post-fusion of a first obstacle detection result, a second obstacle detection result, a third obstacle detection result and a fourth obstacle detection result at each moment;
final fusion is carried out on the detection results of the post-fusion obstacle of the surrounding environment of the vehicle at all times so as to obtain the final detection results of the post-fusion obstacle of the surrounding environment of the vehicle;
and outputting a final detection result of the post-fusion obstacle of the surrounding environment of the vehicle.
8. The data closed loop automatic driving obstacle detection method according to claim 7, further comprising:
the detection results of post-fusion obstacles of the surrounding environment of the vehicle at different moments are arranged into sequence data;
detecting the sequence data by adopting an LSTM or a Transformer to obtain a final detection result R of the post-fusion obstacle in the surrounding environment of the vehicle;
the calculation formula of the final detection result R of the rear fusion obstacle of the surrounding environment of the vehicle is as follows:
R=H LSTM (r T-n ,r T-n+1 ,…,r T-1 ,r T )
or alternatively
R=H Transformer (r T-n ,r T-n+1 ,…,r T-1 ,r T )
Wherein H is LSTM ,H Transformer Respectively, represent the processing of the sequence data by LSTM, transducer, r T-n ,r T-n+1 ,…,r T-1 ,r T The detection results of the post-fusion obstacle at the time T-n, the detection results of the post-fusion obstacle at the time T-n+1, …, the detection results of the post-fusion obstacle at the time T-1 and the detection results of the post-fusion obstacle at the time T are respectively shown.
CN202310328483.7A 2023-03-30 2023-03-30 Automatic driving obstacle detection method of data closed loop Active CN116453087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310328483.7A CN116453087B (en) 2023-03-30 2023-03-30 Automatic driving obstacle detection method of data closed loop

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310328483.7A CN116453087B (en) 2023-03-30 2023-03-30 Automatic driving obstacle detection method of data closed loop

Publications (2)

Publication Number Publication Date
CN116453087A (en) 2023-07-18
CN116453087B (en) 2023-10-20

Family

ID=87121252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310328483.7A Active CN116453087B (en) 2023-03-30 2023-03-30 Automatic driving obstacle detection method of data closed loop

Country Status (1)

Country Link
CN (1) CN116453087B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN109270524A (en) * 2018-10-19 2019-01-25 禾多科技(北京)有限公司 Based on unpiloted multi-data fusion obstacle detector and its detection method
US20210263159A1 (en) * 2019-01-15 2021-08-26 Beijing Baidu Netcom Science and Technology Co., Ltd. Beijing Baidu Netcom Science and Technology Information processing method, system, device and computer storage medium
CN112526520A (en) * 2019-08-29 2021-03-19 中车株洲电力机车研究所有限公司 Pedestrian and obstacle prompting system
WO2021226921A1 (en) * 2020-05-14 2021-11-18 Harman International Industries, Incorporated Method and system of data processing for autonomous driving
WO2022022694A1 (en) * 2020-07-31 2022-02-03 北京智行者科技有限公司 Method and system for sensing automated driving environment
CN113111905A (en) * 2021-02-25 2021-07-13 上海水齐机器人有限公司 Obstacle detection method integrating multiline laser radar and ultrasonic data
CN113885062A (en) * 2021-09-28 2022-01-04 中国科学技术大学先进技术研究院 Data acquisition and fusion equipment, method and system based on V2X
CN115236673A (en) * 2022-06-15 2022-10-25 北京踏歌智行科技有限公司 Multi-radar fusion sensing system and method for large vehicle
CN115187964A (en) * 2022-09-06 2022-10-14 中诚华隆计算机技术有限公司 Automatic driving decision-making method based on multi-sensor data fusion and SoC chip

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Di Song et al., "Data and Decision Level Fusion-Based Crack Detection for Compressor Blade Using Acoustic and Vibration Signal", IEEE Sensors Council, pages 12209-12218 *
Wei-Hsuan Chang et al., "Research on Data Fusion of Positioning System with a Fault Detection Mechanism for Autonomous Vehicles", Applied Sciences, pages 1-15 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665025A (en) * 2023-07-31 2023-08-29 福思(杭州)智能科技有限公司 Data closed-loop method and system
CN116665025B (en) * 2023-07-31 2023-11-14 福思(杭州)智能科技有限公司 Data closed-loop method and system

Also Published As

Publication number Publication date
CN116453087B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
US10817731B2 (en) Image-based pedestrian detection
Maddern et al. 1 year, 1000 km: The oxford robotcar dataset
US10255525B1 (en) FPGA device for image classification
Geiger et al. Vision meets robotics: The kitti dataset
CN116453087B (en) Automatic driving obstacle detection method of data closed loop
Wang et al. Realtime wide-area vehicle trajectory tracking using millimeter-wave radar sensors and the open TJRD TS dataset
CN112951000A (en) Large-scale vehicle blind area bidirectional early warning system
US20220396281A1 (en) Platform for perception system development for automated driving system
US20220318464A1 (en) Machine Learning Data Augmentation for Simulation
Rajamohan et al. MAARGHA: A prototype system for road condition and surface type estimation by fusing multi-sensor data
CN115662166B (en) Automatic driving data processing method and automatic driving traffic system
Miclea et al. Visibility enhancement and fog detection: Solutions presented in recent scientific papers with potential for application to mobile systems
Krajewski et al. Using drones as reference sensors for neural-networks-based modeling of automotive perception errors
CN114842660B (en) Unmanned lane track prediction method and device and electronic equipment
Llorca et al. Sensors and sensing for intelligent vehicles
Naso et al. Autonomous flight insurance method of unmanned aerial vehicles Parot Mambo using semantic segmentation data
Rangkuti et al. Development of Vehicle Detection and Counting Systems with UAV Cameras: Deep Learning and Darknet Algorithms
Porębski et al. Performance evaluation of the highway radar occupancy grid
Azad Deep learning based drone localization and payload detection using vision data
Chen LiDAR-Based Semantic Perception for Autonomous Vehicles
Bhayekar Truck ADAS Deep Learning Based Object Tracking Algorithm Testing Using MIL Co-Simulation Data and Images
WO2023178510A1 (en) Image processing method, device, and system and movable platform
Zbala et al. Image Classification for Autonomous Vehicles
Avenash A Heterogeneous Sensor Fusion Framework for Obstacle Detection in Piloted UAVs
Patel Master of Applied Science in Electrical and Computer Engineering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant