CN116912788A - Attack detection method, device and equipment for automatic driving system and storage medium - Google Patents
- Publication number
- CN116912788A (application number CN202310584658.0A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- detection frame
- target
- array
- information data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention discloses an attack detection method, device, equipment and storage medium for an automatic driving system. The method comprises: performing target detection separately on three-dimensional information data and two-dimensional information data acquired by a three-dimensional sensor and a two-dimensional sensor to obtain a first three-dimensional detection frame and a first two-dimensional detection frame; performing fusion target detection on the three-dimensional information data and the two-dimensional information data to obtain a second three-dimensional detection frame, and performing coordinate transformation on the second three-dimensional detection frame to obtain a second two-dimensional detection frame; performing IoU calculation on the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each object to obtain a first array, and performing IoU calculation on the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each object to obtain a second array; and analyzing the first array and the second array to determine whether a sensor has been attacked. The invention can perform attack detection based on multi-modal data and can locate the attacked sensor.
Description
Technical Field
The present application relates to the field of sensor detection technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting an attack of an autopilot system.
Background
Unmanned driving is a comprehensive technology that draws on many cutting-edge fields such as artificial intelligence, sensing, mapping and computing. Its functional implementation relies on an on-board positioning sensor system, which mainly comprises devices such as a laser radar, a visual camera, a millimeter-wave radar and a Global Positioning System (GPS). The positioning sensor system provides rich positioning data, such as vehicle speed, attitude and position, to the planning and decision-making module of the unmanned vehicle. The safety of route planning and decision control in an automatic driving vehicle therefore rests on the safety of the positioning sensor system: if the positioning sensor system is abnormal, the sensors may acquire wrong positioning information, a wrong driving control strategy is then planned, and the safety of other vehicles, drivers and pedestrians is threatened.
At present, sensor attack detection for unmanned driving is mainly bound to a specific single-sensor target detection algorithm and is carried out on top of that algorithm. This approach lacks universality: for example, an attack detection method matched to a lidar-centric target detection algorithm cannot be applied to an attack detection scenario whose target detection algorithm is camera-centric. In addition, the accuracy of attack detection results based on a single type of sensor is not high.
Disclosure of Invention
In view of the above, the present application provides an attack detection method, apparatus, device and storage medium for an automatic driving system, so as to solve the problems that existing attack detection methods for automatic driving systems lack universality and have low accuracy.
In order to solve the above technical problems, the application adopts a technical scheme that: an attack detection method of an automatic driving system is provided, including: respectively acquiring three-dimensional information data and two-dimensional information data of a target area by using a three-dimensional sensor and a two-dimensional sensor; respectively inputting the three-dimensional information data and the two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model to obtain a first three-dimensional detection frame and a first two-dimensional detection frame of each target object in the target area; inputting the three-dimensional information data and the two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame of each target object in the target area, and carrying out coordinate system change on the second three-dimensional detection frame to obtain a second two-dimensional detection frame; calculating first IoU values of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array, and calculating second IoU values of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array; and performing attack detection and locating the attacked sensor based on the first array and the second array.
As a further improvement of the present application, before inputting the three-dimensional information data and the two-dimensional information data into the fusion detection model trained in advance, the method further includes: carrying out chi-square comparison on the two-dimensional information data at the previous moment and the two-dimensional information data at the current moment to obtain chi-square comparison values; judging whether the chi-square comparison value exceeds a preset threshold value or not; when the chi-square comparison value exceeds a preset threshold, performing histogram matching on the two-dimensional information data at the current moment and the two-dimensional information data at the previous moment to obtain enhanced two-dimensional information data at the current moment, and replacing the two-dimensional information data at the current moment; and when the chi-square comparison value does not exceed the preset threshold value, maintaining the two-dimensional information data at the current moment.
As a further improvement of the present application, the coordinate system change is performed on the second three-dimensional detection frame to obtain a second two-dimensional detection frame, including: acquiring three-dimensional center point coordinates of a second three-dimensional detection frame; confirming homogeneous coordinates of four points on the second three-dimensional detection frame, which are on the same plane with the three-dimensional center point, by utilizing the coordinates of the three-dimensional center point; calculating four two-dimensional coordinate points by using a camera projection matrix, a camera rotation matrix, a rotation matrix from a sensor to a camera coordinate system and homogeneous coordinates which are acquired in advance; and constructing a second two-dimensional detection frame by using the four two-dimensional coordinate points.
As a further improvement of the present application, calculating a first IoU value of a first two-dimensional detection frame and a second two-dimensional detection frame corresponding to each target object to obtain a first array includes: confirming a target first two-dimensional detection frame and a target second two-dimensional detection frame corresponding to each target object; respectively calculating a first area and a second area of the target first two-dimensional detection frame and a target second two-dimensional detection frame and a third area of an overlapping area of the target first two-dimensional detection frame and the target second two-dimensional detection frame; calculating a first IoU value corresponding to each target object by using the first area, the second area and the third area; and constructing a first array by using the first IoU values corresponding to all the target objects.
As a further improvement of the present application, calculating the second IoU values of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array includes: confirming a target first three-dimensional detection frame and a target second three-dimensional detection frame corresponding to each target object; respectively calculating a first volume and a second volume of the target first three-dimensional detection frame and the target second three-dimensional detection frame; calculating the bottom surface area of the overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame; confirming the height of the overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame; calculating a third volume of the overlapping area by using the bottom surface area and the height; calculating a second IoU value corresponding to each target object by using the first volume, the second volume and the third volume; and constructing a second array by using the second IoU values corresponding to all the target objects.
As a further improvement of the present application, performing attack detection and locating the attacked sensor based on the first array and the second array includes: judging whether the first array and the second array have outliers, respectively; when the first array has an outlier and the second array does not, confirming that the three-dimensional sensor is attacked; when the first array has no outlier and the second array does, confirming that the two-dimensional sensor is attacked; when the first array and the second array both have outliers, confirming that the three-dimensional sensor and/or the two-dimensional sensor are attacked; and when neither the first array nor the second array has an outlier, confirming that the three-dimensional sensor and the two-dimensional sensor are not attacked.
As a further improvement of the application, the three-dimensional sensor comprises a laser radar, the two-dimensional sensor comprises a camera, the three-dimensional detection model is constructed based on the PointPillars algorithm, the two-dimensional detection model is constructed based on the YOLOv3 algorithm, and the fusion detection model is constructed based on the AVOD algorithm.
In order to solve the technical problems, the application adopts another technical scheme that: provided is an attack detection device for an automatic driving system, comprising: the acquisition module is used for respectively acquiring three-dimensional information data and two-dimensional information data of the target area by using the three-dimensional sensor and the two-dimensional sensor; the first detection module is used for inputting the three-dimensional information data and the two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model respectively to obtain a first three-dimensional detection frame and a first two-dimensional detection frame of each target object in the target area; the second detection module is used for inputting the three-dimensional information data and the two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame of each target object in the target area, and carrying out coordinate system change on the second three-dimensional detection frame to obtain a second two-dimensional detection frame; the computing module is used for computing a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array, and computing a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array; and the analysis module is used for carrying out attack detection and positioning the attacked sensor based on the first array and the second array.
In order to solve the above technical problems, the application adopts a further technical scheme that: a computer device is provided, comprising a processor and a memory coupled to the processor, the memory storing program instructions that, when executed by the processor, cause the processor to perform the steps of the attack detection method of the automatic driving system according to any one of the above.
In order to solve the above technical problems, the application adopts a further technical scheme that: a storage medium is provided, storing program instructions capable of implementing the attack detection method of the automatic driving system according to any one of the above.
The beneficial effects of the application are as follows: according to the attack detection method of the automatic driving system, target detection is performed separately on the three-dimensional information data and the two-dimensional information data acquired by the three-dimensional sensor and the two-dimensional sensor to obtain the first three-dimensional detection frame and the first two-dimensional detection frame; fusion target detection is performed on the three-dimensional information data and the two-dimensional information data to obtain the second three-dimensional detection frame, and coordinate transformation is applied to the second three-dimensional detection frame to obtain the second two-dimensional detection frame; IoU calculation on the first and second two-dimensional detection frames corresponding to each object yields the first array, and IoU calculation on the first and second three-dimensional detection frames corresponding to each object yields the second array; the first array and the second array are then analyzed to determine whether a sensor has been attacked. Attack detection thus exploits the correlation of the three-dimensional sensor and the two-dimensional sensor in time and space, which improves the accuracy of attack detection. Moreover, attack detection can be realized with the three-dimensional sensor and two-dimensional sensor that form the basic configuration of an automatic driving vehicle, without being tied to a specific single-sensor target detection algorithm, so the method has stronger universality.
Drawings
FIG. 1 is a flow chart of an attack detection method of an autopilot system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a second three-dimensional detection frame according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a two-dimensional detection frame overlapping region according to an embodiment of the present application;
FIG. 4 is a functional block diagram of an attack detection device of an autopilot system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a computer device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," and the like in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", and "a third" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back … …) in embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between the components in a particular gesture (as shown in the drawings), and if the particular gesture changes, the directional indication changes accordingly. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Fig. 1 is a flowchart of an attack detection method of an autopilot system according to an embodiment of the present application. It should be noted that, if there are substantially the same results, the method of the present application is not limited to the flow sequence shown in fig. 1. As shown in fig. 1, the attack detection method of the autopilot system includes the steps of:
step S101: and respectively acquiring three-dimensional information data and two-dimensional information data of the target area by using a three-dimensional sensor and a two-dimensional sensor.
In this embodiment, the three-dimensional sensor is preferably a laser radar, and it should be noted that the three-dimensional sensor in the embodiment of the present application is not limited to the laser radar, and other sensor devices that can be used to obtain three-dimensional information data are also included in the protection scope of the present application. The two-dimensional sensor is preferably a camera, and it should be noted that the two-dimensional sensor in the embodiment of the application is not limited to the camera, and other sensor devices that can be used to obtain two-dimensional information data are also included in the protection scope of the application.
Specifically, in this embodiment, when the vehicle is driven by the automatic driving system, the three-dimensional sensor and the two-dimensional sensor mounted on the vehicle simultaneously collect three-dimensional information data and two-dimensional information data of the target area. When attack detection is performed, the detection range is limited to the target area, which is the part where the fields of view of the three-dimensional sensor and the two-dimensional sensor overlap. For example, for a laser radar and a camera, the overlapping area is the field of view (FOV) of the laser radar, and the three-dimensional information data collected by the laser radar and the two-dimensional information data collected by the camera are aligned by projecting the image captured by the camera into the visible area of the laser radar.
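The patent text does not give code for this field-of-view restriction; the following is a minimal sketch of one common way to implement it, assuming KITTI-style calibration matrices (P: 3x4 camera projection, R: 3x3 rotation, Tr_velo_to_cam: 3x4 lidar-to-camera transform). All function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def points_in_camera_fov(points_xyz, P, R, Tr_velo_to_cam, img_w, img_h):
    """Keep only lidar points that project inside the camera image.

    points_xyz: (N, 3) lidar coordinates. Returns a boolean mask over the N points.
    """
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])        # (N, 4) homogeneous points

    # Pad R and Tr to 4x4 so the chain P * R * Tr * X is well-formed.
    R4 = np.eye(4)
    R4[:3, :3] = R
    Tr4 = np.vstack([Tr_velo_to_cam, [0.0, 0.0, 0.0, 1.0]])

    cam = R4 @ Tr4 @ homog.T                                 # (4, N) camera-frame coordinates
    in_front = cam[2, :] > 0                                 # only points in front of the camera
    pix = P @ cam                                            # (3, N) homogeneous pixel coordinates
    u = pix[0, :] / pix[2, :]
    v = pix[1, :] / pix[2, :]
    in_image = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return in_front & in_image
```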
Step S102: and respectively inputting the three-dimensional information data and the two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model to obtain a first three-dimensional detection frame and a first two-dimensional detection frame of each target object in the target area.
The three-dimensional detection model is constructed based on the PointPillars algorithm. Specifically, the data of the three-dimensional sensor are processed by the PointPillars algorithm for three-dimensional target detection to obtain the first three-dimensional detection frame. The input data are the three-dimensional information data, i.e., the raw point cloud data of the three-dimensional sensor, in the format [x, y, z, intensity]. The raw output of the three-dimensional sensor is a point cloud cluster, which can be expressed as a vector (for a 64-line laser radar, 1800 point clouds per line scan), where X represents the abscissa of the point cloud on the horizontal plane, Y represents the ordinate of the point cloud on the horizontal plane, Z represents the height of the point cloud, and intensity represents the reflection intensity of the point cloud. A two-stage scheme is used with PointNet++ as the backbone network: first a segmentation task is completed to judge the label of each three-dimensional point, and a candidate frame is generated from the features of each point classified as foreground; the frame is then optimized. The target object detection result is represented by a vector containing x_lidar = [X_v, Y_v, Z_v]^T, the center point of the target object in the three-dimensional sensor coordinate system, and [L, W, H]^T, the length, width and height of the first three-dimensional detection frame.
The two-dimensional detection model is constructed based on the YOLOv3 algorithm. Specifically, the YOLOv3 algorithm performs two-dimensional target detection on the two-dimensional information data collected by the two-dimensional sensor to obtain the target center point. Taking a camera as an example, the collected picture is input into the two-dimensional detection model in RGB format, and the model calls a deep convolutional neural network to extract features from the RGB image. Common structures in such networks include convolutional layers, pooling layers, activation layers, dropout layers, BN (batch normalization) layers, fully-connected layers and the like; the features finally extracted from the picture effectively describe the information of the target object. The input data format is the RGB image of the camera, and the output is the center point coordinates together with the width and height of the detection frame in the image coordinate system, i.e., [X_camera, Y_camera, w, h], where X_camera represents the value in the x direction of the image coordinate system, Y_camera represents the value in the y direction, w represents the width of the first two-dimensional detection frame, and h represents its height. After the first two-dimensional detection frame is obtained, the coordinates of its four corner points can be further obtained as:

X_camera,1 = X_camera,3 = X_camera - w/2;

X_camera,2 = X_camera,4 = X_camera + w/2;

Y_camera,1 = Y_camera,2 = Y_camera + h/2;

Y_camera,3 = Y_camera,4 = Y_camera - h/2;

where X_camera,i (i = 1, 2, 3, 4) denotes the x-direction coordinate in the picture coordinate system of the upper-left, upper-right, lower-left and lower-right corner of the first two-dimensional detection frame, respectively, and Y_camera,i (i = 1, 2, 3, 4) denotes the corresponding y-direction coordinate.
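As a concrete illustration of the corner formulas above, a small sketch (names are hypothetical; the corner numbering and the sign convention for h follow the description above):

```python
def box_center_to_corners(x_c, y_c, w, h):
    """Convert a YOLO-style [X_camera, Y_camera, w, h] detection into the four
    corner coordinates used above (1: upper-left, 2: upper-right,
    3: lower-left, 4: lower-right)."""
    x1 = x3 = x_c - w / 2.0
    x2 = x4 = x_c + w / 2.0
    y1 = y2 = y_c + h / 2.0
    y3 = y4 = y_c - h / 2.0
    return [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]
```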
Step S103: inputting the three-dimensional information data and the two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame of each target object in the target area, and carrying out coordinate system change on the second three-dimensional detection frame to obtain a second two-dimensional detection frame.
Specifically, the fusion detection model is constructed based on the AVOD algorithm (Joint 3D Proposal Generation and Object Detection from View Aggregation). After the three-dimensional information data and the two-dimensional information data are obtained, they are input into the AVOD algorithm for target detection to obtain the second three-dimensional detection frame. The AVOD algorithm fuses the three-dimensional information data and the two-dimensional information data; when the three-dimensional information data are the point cloud data collected by the laser radar, the AVOD algorithm uses only the top view and the front view of the point cloud, which reduces the amount of calculation without losing too much information. Three-dimensional candidate regions are then generated, the features and the candidate regions are fused, and the final second three-dimensional detection frame is output.
Further, in order to improve the defensive ability of the two-dimensional sensor, in some embodiments, step S103 further includes:
1. and carrying out chi-square comparison on the two-dimensional information data at the previous moment and the two-dimensional information data at the current moment to obtain a chi-square comparison value.
2. Judging whether the chi-square comparison value exceeds a preset threshold value.
3. When the chi-square comparison value exceeds a preset threshold, performing histogram matching on the two-dimensional information data at the current moment and the two-dimensional information data at the last moment to obtain enhanced two-dimensional information data at the current moment, and replacing the two-dimensional information data at the current moment.
4. And when the chi-square comparison value does not exceed the preset threshold value, maintaining the two-dimensional information data at the current moment.
In this embodiment, the purpose of image enhancement is to improve the defensive ability of the fusion detection model to two-dimensional sensor attacks. And carrying out chi-square comparison on the two-dimensional information data at the previous moment and the two-dimensional information data at the current moment to obtain a chi-square comparison value, and judging whether the chi-square comparison value exceeds a preset threshold value. When the chi-square comparison value exceeds a preset threshold, performing histogram matching on the two-dimensional information data at the current moment and the two-dimensional information data at the last moment to obtain enhanced two-dimensional information data at the current moment, and replacing the two-dimensional information data at the current moment. And when the chi-square comparison value does not exceed the preset threshold value, maintaining the two-dimensional information data at the current moment.
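A minimal sketch of this enhancement step, assuming the two-dimensional information data are color frames and using OpenCV's chi-square histogram comparison together with scikit-image's histogram matching (the channel_axis argument assumes scikit-image 0.19 or newer); the threshold value and all names are illustrative placeholders, not values from the patent.

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def enhance_current_frame(prev_frame, cur_frame, chi2_threshold=50.0):
    """Compare grayscale histograms of consecutive frames with the chi-square
    metric; if the difference exceeds the threshold, replace the current frame
    with a histogram-matched version of itself against the previous frame."""
    def gray_hist(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
        return cv2.normalize(hist, hist).flatten()

    chi2 = cv2.compareHist(gray_hist(prev_frame), gray_hist(cur_frame),
                           cv2.HISTCMP_CHISQR)
    if chi2 > chi2_threshold:
        # Reshape the current frame's histogram toward the previous frame's,
        # and use the enhanced frame in place of the original current frame.
        return match_histograms(cur_frame, prev_frame, channel_axis=-1)
    return cur_frame
```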
Further, in step S103, the step of performing coordinate system change on the second three-dimensional detection frame to obtain the second two-dimensional detection frame specifically includes:
1. and acquiring the coordinates of the three-dimensional center point of the second three-dimensional detection frame.
2. And confirming homogeneous coordinates of four points on the second three-dimensional detection frame, which are on the same plane with the three-dimensional center point, by utilizing the coordinates of the three-dimensional center point.
3. And calculating four two-dimensional coordinate points by using a camera projection matrix, a camera rotation matrix, a rotation matrix from a sensor to a camera coordinate system and homogeneous coordinates which are acquired in advance.
4. And constructing a second two-dimensional detection frame by using the four two-dimensional coordinate points.
Specifically, in the present embodiment, 4 corner points (containing only x and y coordinates) and 2 heights are used to describe the second three-dimensional detection frame, which, as shown in fig. 2, is represented by the vector [c_1, c_2, c_3, c_4, h_1, h_2], where c_i = [x_ci, y_ci]^T (i = 1, 2, 3, 4) are the coordinates of the 4 vertices of the second three-dimensional detection frame in the x and y directions, h_1 is the height of the bottom surface of the second three-dimensional detection frame above the plane formed by the x axis and the y axis of the three-dimensional coordinate system, and h_2 is the height of the top surface of the second three-dimensional detection frame above that plane.
Specifically, after the second three-dimensional detection frame is obtained, it is projected into the image coordinate system to obtain the second two-dimensional detection frame. Referring to fig. 2, the vector form of the second three-dimensional detection frame is first converted into the conventional three-dimensional detection frame format Y_3D_fusion_detection = [X_v, Y_v, Z_v]^T, where:

Z_v = (h_2 - h_1)/2;

X_v = x_c1 - x_c2;

Y_v = y_c1 - y_c2;

where x_ci denotes the x-axis coordinate of point c_i, y_ci denotes the y-axis coordinate of point c_i (i = 1, 2, 3, 4), and [X_v, Y_v, Z_v]^T are the coordinates of the center point of the second three-dimensional detection frame.
Then, the four points near the center point of the second three-dimensional detection frame (points A, B, C and D in fig. 2) are transformed in turn into four points in the image coordinate system. The specific transformation is:

Y_A_homogeneous = [X_v,1, Y_v,1, Z_v,1]^T = [X_v - (x_c2 - x_c1)/2, Y_v, h_2];

Y_B_homogeneous = [X_v,2, Y_v,2, Z_v,2]^T = [X_v + (x_c2 - x_c1)/2, Y_v, h_2];

Y_C_homogeneous = [X_v,3, Y_v,3, Z_v,3]^T = [X_v - (x_c2 - x_c1)/2, Y_v, h_1];

Y_D_homogeneous = [X_v,4, Y_v,4, Z_v,4]^T = [X_v + (x_c2 - x_c1)/2, Y_v, h_1];

Y = P * R * Tr_velo_to_cam * Y_n_homogeneous, n = A, B, C, D;

where Y represents the coordinates of the second two-dimensional detection frame, Y_n_homogeneous is the homogeneous vector of the corresponding point of the target object, P is the camera projection matrix, R is the camera rotation matrix, and Tr_velo_to_cam is the 3x4 rotation matrix from the three-dimensional sensor coordinate system to the camera coordinate system.
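A sketch of this projection in code. It assumes each homogeneous point is extended with a trailing 1 so that the 3x4 matrices compose, and it forms the second two-dimensional detection frame as the axis-aligned rectangle enclosing the four projected points; both choices are illustrative assumptions rather than details stated in the patent.

```python
import numpy as np

def project_box_points_to_image(points_3d, P, R, Tr_velo_to_cam):
    """Project the four 3D points A, B, C, D (rows of a (4, 3) array in the
    three-dimensional sensor frame) into pixel coordinates, then take the
    enclosing rectangle as the second two-dimensional detection frame."""
    R4 = np.eye(4)
    R4[:3, :3] = R                                       # pad R to 4x4
    Tr4 = np.vstack([Tr_velo_to_cam, [0.0, 0.0, 0.0, 1.0]])  # pad Tr to 4x4
    homog = np.hstack([points_3d, np.ones((4, 1))]).T    # (4, 4) homogeneous points
    pix = P @ R4 @ Tr4 @ homog                           # (3, 4) pixel homogeneous coords
    u = pix[0] / pix[2]
    v = pix[1] / pix[2]
    # Axis-aligned rectangle enclosing the projected points.
    return float(u.min()), float(v.min()), float(u.max()), float(v.max())
```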
Step S104: and calculating a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array, and calculating a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array.
In particular, ioU values can be used to characterize inconsistencies between two sets of test frames. And IoU numerical calculation is performed by using the first two-dimensional detection frame and the second two-dimensional detection frame to obtain a first IoU numerical value representing the inconsistency of the first two-dimensional detection frame and the second two-dimensional detection frame, and a plurality of target objects correspond to a plurality of first IoU numerical values and are constructed into a first array. And IoU numerical calculation is carried out by using the first three-dimensional detection frame and the second three-dimensional detection frame to obtain a second IoU numerical value representing the inconsistency of the first three-dimensional detection frame and the second three-dimensional detection frame, and a plurality of target objects correspond to a plurality of second IoU numerical values and are constructed into a second array. It should be noted that, the two groups of detection frames correspond to the same target area, and the target object of the target area is constant, so that the number of elements in the first array is equal to that of the second array, and each target object has a corresponding element in the first array or the second array.
Further, in step S104, a step of calculating a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array specifically includes:
1. and confirming a target first two-dimensional detection frame and a target second two-dimensional detection frame corresponding to each target object.
Specifically, since there are multiple target objects in the scene and multiple detection frames exist, when the target first two-dimensional detection frame and the target second two-dimensional detection frame corresponding to each target object are confirmed, one first two-dimensional detection frame is selected as the target first two-dimensional detection frame, the Euclidean distance between the center point of the target first two-dimensional detection frame and the center point of each second two-dimensional detection frame is calculated, and the second two-dimensional detection frame with the smallest Euclidean distance is selected as the target second two-dimensional detection frame.
2. And respectively calculating a first area and a second area of the target first two-dimensional detection frame and the target second two-dimensional detection frame and a third area of an overlapping area of the target first two-dimensional detection frame and the target second two-dimensional detection frame.
Specifically, coordinates of four vertexes of the two-dimensional detection frame are obtained, and the first area and the second area of the target first two-dimensional detection frame and the target second two-dimensional detection frame are obtained through calculation of the coordinates of the four vertexes. The calculation formula is as follows:
S_A = |x_a2 - x_a1| × |y_a2 - y_a1|;

S_B = |x_b2 - x_b1| × |y_b2 - y_b1|;

where S_A denotes the first area and A1(x_a1, y_a1), B1(x_a1, y_a2), C1(x_a2, y_a1), D1(x_a2, y_a2) denote the coordinates of the four vertices of the target first two-dimensional detection frame; S_B denotes the second area and A2(x_b1, y_b1), B2(x_b1, y_b2), C2(x_b2, y_b1), D2(x_b2, y_b2) denote the coordinates of the four vertices of the target second two-dimensional detection frame.

For example, as illustrated in fig. 3, when the two detection frames have an overlapping area, the overlapping area is also a rectangular frame, and its four vertices can be obtained from the four vertex coordinates of the overlapping target first two-dimensional detection frame and target second two-dimensional detection frame: the upper-left corner is A2(x_b1, y_b1), the lower-left corner is E(x_b1, y_a2), the upper-right corner is F(x_a2, y_b1), and the lower-right corner is D1(x_a2, y_a2). The third area of the overlapping area is then calculated from these four vertex coordinates.
3. And calculating to obtain a first IoU value corresponding to each target object by using the first area, the second area and the third area.
Specifically, the first IoU value = third area / (first area + second area - third area).
4. And constructing a first array by using the first IoU values corresponding to all the target objects.
Specifically, the first array is expressed as I = [i_1, i_2, …, i_n], where n is the number of first IoU values.
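Putting the four sub-steps together, a minimal sketch of the first-array computation, assuming each two-dimensional detection frame is axis-aligned and given as (x_min, y_min, x_max, y_max), and using the nearest-center matching described in sub-step 1; all names are illustrative.

```python
import numpy as np

def iou_2d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h                    # third area
    area_a = (ax2 - ax1) * (ay2 - ay1)           # first area
    area_b = (bx2 - bx1) * (by2 - by1)           # second area
    return inter / (area_a + area_b - inter + 1e-9)

def center(box):
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def first_array(first_2d_boxes, second_2d_boxes):
    """For each first two-dimensional detection frame, match the second frame
    with the smallest Euclidean center distance and collect the IoU values."""
    values = []
    for box_a in first_2d_boxes:
        dists = [np.linalg.norm(center(box_a) - center(box_b))
                 for box_b in second_2d_boxes]
        box_b = second_2d_boxes[int(np.argmin(dists))]
        values.append(iou_2d(box_a, box_b))
    return np.array(values)                      # I = [i_1, ..., i_n]
```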
Further, in step S104, a step of calculating a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array specifically includes:
1. And confirming a target first three-dimensional detection frame and a target second three-dimensional detection frame corresponding to each target object.
Specifically, the target first three-dimensional detection frame and the target second three-dimensional detection frame are also confirmed using the euclidean distance between the coordinates of the center points of the first three-dimensional detection frame and the second three-dimensional detection frame.
2. And respectively calculating a first volume and a second volume of the target first three-dimensional detection frame and the target second three-dimensional detection frame.
Specifically, the length, width and height of the three-dimensional detection frame can be confirmed according to the coordinates of 8 vertexes of the three-dimensional detection frame, and the volume of the three-dimensional detection frame is obtained through calculation by using the length, width and height.
3. And calculating the bottom surface area of the overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame.
4. And confirming the height of the overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame.
5. And calculating a third volume of the overlapped area by using the bottom surface area and the height.
It should be understood that the overlapping area of the first three-dimensional detection frame and the second three-dimensional detection frame is also a cuboid, so its volume can be obtained by multiplying its bottom area by its height. The target objects corresponding to the first three-dimensional detection frame and the second three-dimensional detection frame are all on the ground, that is, the bottom surfaces of the two frames lie in the same plane. Therefore, the vertex coordinates of the bottom surface of the overlapping area can be confirmed from the four bottom-surface vertex coordinates of the first three-dimensional detection frame and the four bottom-surface vertex coordinates of the second three-dimensional detection frame, and the bottom area of the overlapping area can then be obtained from these vertex coordinates. As for the height of the overlapping area, since the bottom surfaces of the two frames lie in the same plane, the height of the overlapping area is the smaller of the heights of the first three-dimensional detection frame and the second three-dimensional detection frame. Once the bottom area and the height of the overlapping area are obtained, the third volume of the overlapping area can be calculated.
6. And calculating a second IoU value corresponding to each target object by using the first volume, the second volume and the third volume.
Specifically, the second IoU value = third volume / (first volume + second volume - third volume).
7. And constructing a second array by using the second IoU values corresponding to all the target objects.
Specifically, the second array is expressed as U = [u_1, u_2, …, u_n], where n is the number of second IoU values.
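A corresponding sketch for the second array. It assumes each three-dimensional detection frame is axis-aligned and described by its ground-plane rectangle plus a height, with both frames resting on the same ground plane as stated above; the representation and names are illustrative.

```python
import numpy as np

def iou_3d_ground_aligned(box_a, box_b):
    """IoU of two ground-aligned 3D boxes given as
    (x_min, y_min, x_max, y_max, height), both with their bottom faces on the
    same ground plane."""
    ax1, ay1, ax2, ay2, ah = box_a
    bx1, by1, bx2, by2, bh = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_l = max(0.0, min(ay2, by2) - max(ay1, by1))
    base_area = inter_w * inter_l                # bottom area of the overlap
    inter_h = min(ah, bh)                        # the smaller of the two heights
    inter_vol = base_area * inter_h              # third volume
    vol_a = (ax2 - ax1) * (ay2 - ay1) * ah       # first volume
    vol_b = (bx2 - bx1) * (by2 - by1) * bh       # second volume
    return inter_vol / (vol_a + vol_b - inter_vol + 1e-9)

def second_array(matched_pairs):
    """matched_pairs: list of (first_3d_box, second_3d_box) tuples, matched by
    nearest center distance as in the 2D case. Returns U = [u_1, ..., u_n]."""
    return np.array([iou_3d_ground_aligned(a, b) for a, b in matched_pairs])
```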
Step S105: the first array and the second array are used for carrying out attack detection and positioning the attacked sensor.
Specifically, after the first array and the second array are obtained, inconsistency detection is performed on the two arrays to confirm whether a sensor has been attacked and to locate whether the attacked sensor is the three-dimensional sensor or the two-dimensional sensor.
Further, step S105 specifically includes:
1. and judging whether the first array and the second array have outliers or not respectively.
2. And when the first array has an outlier and the second array does not have the outlier, confirming that the three-dimensional sensor is attacked.
3. And when the first array has no outlier and the second array has outlier, confirming that the two-dimensional sensor is attacked.
4. And when the first array and the second array both have outliers, confirming that the three-dimensional sensor and/or the two-dimensional sensor are attacked.
5. And when the first array and the second array have no outlier, confirming that the three-dimensional sensor and the two-dimensional sensor are not attacked.
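The description does not specify which outlier test is applied to each array; the sketch below uses a median-absolute-deviation rule purely as a stand-in, with an assumed threshold factor. The case analysis follows items 2 to 5 above.

```python
import numpy as np

def has_outlier(values, mad_factor=3.0):
    """Flag an array as containing an outlier when any IoU value deviates from
    the median by more than mad_factor times the median absolute deviation.
    This rule is an illustrative placeholder; the patent only requires some
    outlier test on each array."""
    values = np.asarray(values, dtype=float)
    if values.size == 0:
        return False
    med = np.median(values)
    mad = np.median(np.abs(values - med)) + 1e-9
    return bool(np.any(np.abs(values - med) > mad_factor * mad))

def locate_attacked_sensor(first_array, second_array):
    o1, o2 = has_outlier(first_array), has_outlier(second_array)
    if o1 and not o2:
        return "three-dimensional sensor attacked"
    if o2 and not o1:
        return "two-dimensional sensor attacked"
    if o1 and o2:
        return "three-dimensional and/or two-dimensional sensor attacked"
    return "no attack detected"
```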
According to the attack detection method of the automatic driving system provided by this embodiment, target detection is performed separately on the three-dimensional information data and the two-dimensional information data acquired by the three-dimensional sensor and the two-dimensional sensor to obtain the first three-dimensional detection frame and the first two-dimensional detection frame; fusion target detection is performed on the three-dimensional information data and the two-dimensional information data to obtain the second three-dimensional detection frame, and coordinate transformation is applied to the second three-dimensional detection frame to obtain the second two-dimensional detection frame; IoU calculation on the first and second two-dimensional detection frames corresponding to each object yields the first array, and IoU calculation on the first and second three-dimensional detection frames corresponding to each object yields the second array; the first array and the second array are then analyzed to determine whether a sensor has been attacked. Attack detection thus exploits the correlation of the three-dimensional sensor and the two-dimensional sensor in time and space, which improves the accuracy of attack detection. Moreover, attack detection can be realized with the three-dimensional sensor and two-dimensional sensor that form the basic configuration of an automatic driving vehicle, without relying on a single sensor as the main target detection source, so the method is better suited to practical attack detection.
Fig. 4 is a functional block diagram of an attack detection device of an autopilot system according to an embodiment of the present invention. As shown in fig. 4, the attack detection device 20 of the autopilot system includes an acquisition module 21, a first detection module 22, a second detection module 23, a calculation module 24, and an analysis module 25.
An acquisition module 21, configured to acquire three-dimensional information data and two-dimensional information data of the target area by using a three-dimensional sensor and a two-dimensional sensor, respectively;
the first detection module 22 is configured to input three-dimensional information data and two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model respectively, so as to obtain a first three-dimensional detection frame and a first two-dimensional detection frame of each target object in the target area;
the second detection module 23 is configured to input three-dimensional information data and two-dimensional information data into a pre-trained fusion detection model, obtain a second three-dimensional detection frame of each target object in the target area, and perform coordinate system change on the second three-dimensional detection frame to obtain a second two-dimensional detection frame;
the calculating module 24 is configured to calculate a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array, and calculate a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array;
An analysis module 25 for attack detection and locating the attacked sensor based on the first array and the second array.
Optionally, before the second detection module 23 performs the operation of inputting the three-dimensional information data and the two-dimensional information data into the pre-trained fusion detection model, the second detection module is further configured to: carrying out chi-square comparison on the two-dimensional information data at the previous moment and the two-dimensional information data at the current moment to obtain chi-square comparison values; judging whether the chi-square comparison value exceeds a preset threshold value or not; when the chi-square comparison value exceeds a preset threshold, performing histogram matching on the two-dimensional information data at the current moment and the two-dimensional information data at the previous moment to obtain enhanced two-dimensional information data at the current moment, and replacing the two-dimensional information data at the current moment; and when the chi-square comparison value does not exceed the preset threshold value, maintaining the two-dimensional information data at the current moment.
Optionally, the second detection module 23 performs an operation of performing a coordinate system change on the second three-dimensional detection frame to obtain the second two-dimensional detection frame, which specifically includes: acquiring three-dimensional center point coordinates of a second three-dimensional detection frame; confirming homogeneous coordinates of four points on the second three-dimensional detection frame, which are on the same plane with the three-dimensional center point, by utilizing the coordinates of the three-dimensional center point; calculating four two-dimensional coordinate points by using a camera projection matrix, a camera rotation matrix, a rotation matrix from a sensor to a camera coordinate system and homogeneous coordinates which are acquired in advance; and constructing a second two-dimensional detection frame by using the four two-dimensional coordinate points.
Optionally, the calculating module 24 performs an operation of calculating the first IoU values of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain the first array, which specifically includes: confirming a target first two-dimensional detection frame and a target second two-dimensional detection frame corresponding to each target object; respectively calculating a first area and a second area of the target first two-dimensional detection frame and a target second two-dimensional detection frame and a third area of an overlapping area of the target first two-dimensional detection frame and the target second two-dimensional detection frame; calculating a first IoU value corresponding to each target object by using the first area, the second area and the third area; and constructing a first array by using the first IoU values corresponding to all the target objects.
Optionally, the calculating module 24 performs an operation of calculating the second IoU values of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array, which specifically includes: confirming a target first three-dimensional detection frame and a target second three-dimensional detection frame corresponding to each target object; respectively calculating a first volume and a second volume of a first three-dimensional target detection frame and a second three-dimensional target detection frame; calculating the bottom surface area of the overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame; confirming the height of an overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame; calculating a third volume of the overlapped area by using the bottom surface area and the height; calculating a second IoU value corresponding to each target object by using the first volume, the second volume and the third volume; and constructing a second array by using the second IoU values corresponding to all the target objects.
Optionally, the analysis module 25 performs operations of detecting an attack and locating an attacked sensor based on the first array and the second array, specifically including: judging whether the first array and the second array have outliers or not respectively; when the first array has an outlier and the second array does not have the outlier, confirming that the three-dimensional sensor is attacked; when the first array has no outlier and the second array has outlier, confirming that the two-dimensional sensor is attacked; when the first array and the second array both have outliers, confirming that the three-dimensional sensor and/or the two-dimensional sensor are attacked; and when the first array and the second array have no outlier, confirming that the three-dimensional sensor and the two-dimensional sensor are not attacked.
Optionally, the three-dimensional sensor comprises a laser radar, the two-dimensional sensor comprises a camera, the three-dimensional detection model is constructed based on the PointPillars algorithm, the two-dimensional detection model is constructed based on the YOLOv3 algorithm, and the fusion detection model is constructed based on the AVOD algorithm.
Further details regarding the implementation of the technical solution of the modules in the attack detection device of the autopilot system according to the above embodiments, the description of the attack detection method of the autopilot system in the above embodiment may be referred to, and will not be repeated here.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other. For the apparatus class embodiments, the description is relatively simple as it is substantially similar to the method embodiments, and reference is made to the description of the method embodiments for relevant points.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the invention. As shown in fig. 5, the computer device 30 includes a processor 31 and a memory 32 coupled to the processor 31, where the memory 32 stores program instructions that, when executed by the processor 31, cause the processor 31 to perform the steps of the attack detection method of the autopilot system according to any one of the embodiments.
The processor 31 may also be referred to as a CPU (Central Processing Unit ). The processor 31 may be an integrated circuit chip with signal processing capabilities. The processor 31 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium of the embodiment of the present application stores program instructions 41 capable of implementing the attack detection method of the autopilot system. The program instructions 41 may be stored in the storage medium in the form of a software product and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code, or a computer device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed computer apparatus, device, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of units is merely a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via interfaces, devices, or units, and may be electrical, mechanical, or in another form.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units.

The foregoing describes only embodiments of the present application and does not limit the patent scope of the application; all equivalent structures or equivalent processes made using the description and accompanying drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the patent scope of the application.
Claims (10)
1. An attack detection method for an autopilot system, comprising:
respectively acquiring three-dimensional information data and two-dimensional information data of a target area by using a three-dimensional sensor and a two-dimensional sensor;
respectively inputting the three-dimensional information data and the two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model to obtain a first three-dimensional detection frame and a first two-dimensional detection frame of each target object in the target area;
inputting the three-dimensional information data and the two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame of each target object in the target area, and performing a coordinate system change on the second three-dimensional detection frame to obtain a second two-dimensional detection frame;
calculating a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array, and calculating a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array;
and performing attack detection and localization of the attacked sensor based on the first array and the second array.
2. The method for detecting an attack on an autopilot system according to claim 1, wherein before the three-dimensional information data and the two-dimensional information data are input into the pre-trained fusion detection model, the method further comprises:
carrying out chi-square comparison on the two-dimensional information data at the previous moment and the two-dimensional information data at the current moment to obtain a chi-square comparison value;
judging whether the chi-square comparison value exceeds a preset threshold value or not;
when the chi-square comparison value exceeds the preset threshold value, performing histogram matching on the two-dimensional information data at the current moment with the two-dimensional information data at the previous moment to obtain enhanced two-dimensional information data at the current moment, and using the enhanced data to replace the two-dimensional information data at the current moment;
and when the chi-square comparison value does not exceed the preset threshold value, maintaining the two-dimensional information data at the current moment.
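By way of illustration only and not as part of the claims, a minimal Python sketch of the check in claim 2 could look as follows; the OpenCV and scikit-image calls, the grayscale histograms, and the threshold value of 50.0 are assumptions introduced for the example, since the claim specifies neither a library nor a concrete threshold.

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def enhance_current_frame(prev_frame, curr_frame, chi2_threshold=50.0):
    """Compares grayscale histograms of consecutive frames with the chi-square
    metric; if the comparison value exceeds the preset threshold, the current
    frame is histogram-matched to the previous frame and the enhanced frame
    replaces it, otherwise the current frame is kept unchanged."""
    hists = []
    for frame in (prev_frame, curr_frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
        hists.append(cv2.normalize(hist, hist).flatten())

    # Chi-square comparison value between previous and current histograms.
    chi2 = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CHISQR)

    if chi2 > chi2_threshold:
        # Match the current frame's histogram to the previous frame's.
        matched = match_histograms(curr_frame, prev_frame, channel_axis=-1)
        return matched.astype(np.uint8)
    return curr_frame
```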
3. The method for detecting an attack on an autopilot system according to claim 1, wherein the performing a coordinate system change on the second three-dimensional detection frame to obtain the second two-dimensional detection frame comprises:
acquiring the coordinates of a three-dimensional center point of the second three-dimensional detection frame;
confirming, by using the three-dimensional center point coordinates, homogeneous coordinates of four points on the second three-dimensional detection frame that lie in the same plane as the three-dimensional center point;
calculating four two-dimensional coordinate points by using a pre-acquired camera projection matrix, a camera rotation matrix, a rotation matrix from the sensor coordinate system to the camera coordinate system, and the homogeneous coordinates;
and constructing the second two-dimensional detection frame by utilizing the four two-dimensional coordinate points.
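As an illustrative sketch only, the projection in claim 3 could be written as below; the KITTI-style names for the calibration matrices (P, R0, Tr_sensor_to_cam) and the use of an axis-aligned enclosing box are assumptions for the example, not limitations of the claim.

```python
import numpy as np

def project_to_second_2d_frame(corners_3d, P, R0, Tr_sensor_to_cam):
    """Projects four coplanar corner points of the second 3D detection frame
    into the image and encloses them with an axis-aligned 2D frame.
    P: 3x4 camera projection matrix, R0: 3x3 camera rotation matrix,
    Tr_sensor_to_cam: 3x4 transform from the sensor to the camera frame."""
    n = corners_3d.shape[0]                                   # expected n = 4
    pts_h = np.hstack([corners_3d, np.ones((n, 1))])          # homogeneous coordinates (n, 4)

    cam = Tr_sensor_to_cam @ pts_h.T                          # sensor -> camera frame (3, n)
    rect = R0 @ cam                                           # apply camera rotation (3, n)
    rect_h = np.vstack([rect, np.ones((1, n))])               # back to homogeneous (4, n)

    img = P @ rect_h                                          # project onto image plane (3, n)
    uv = (img[:2] / img[2]).T                                 # four 2D coordinate points (n, 2)

    # Second two-dimensional detection frame enclosing the projected points.
    x_min, y_min = uv.min(axis=0)
    x_max, y_max = uv.max(axis=0)
    return np.array([x_min, y_min, x_max, y_max])
```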
4. The method for detecting an attack on an autopilot system according to claim 1, wherein the calculating a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array comprises:
confirming a target first two-dimensional detection frame and a target second two-dimensional detection frame corresponding to each target object;
respectively calculating a first area of the target first two-dimensional detection frame, a second area of the target second two-dimensional detection frame, and a third area of the overlapping area of the target first two-dimensional detection frame and the target second two-dimensional detection frame;
calculating a first IoU value corresponding to each target object by using the first area, the second area and the third area;
and constructing the first array by using the first IoU values corresponding to all the target objects.
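For illustration only, the planar first IoU value of claim 4 reduces to the standard 2D box IoU sketched below; the (xmin, ymin, xmax, ymax) box format and the placeholder list names in the trailing comment are assumptions for the example.

```python
def iou_2d(box_a, box_b):
    """2D IoU for boxes given as (xmin, ymin, xmax, ymax): the third
    (overlap) area divided by the union of the first and second areas."""
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    # Third area: overlap of the two boxes.
    dx = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    dy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    overlap = dx * dy

    union = area_a + area_b - overlap
    return overlap / union if union > 0 else 0.0

# Example: two 10x10 boxes overlapping on a 5x5 patch -> 25 / 175.
print(iou_2d((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143

# The first array would collect one such value per target object, e.g.
# first_array = [iou_2d(a, b) for a, b in zip(first_2d_boxes, second_2d_boxes)]
```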
5. The method for detecting an attack on an autopilot system according to claim 1, wherein the calculating a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array comprises:
confirming a target first three-dimensional detection frame and a target second three-dimensional detection frame corresponding to each target object;
respectively calculating a first volume and a second volume of the target first three-dimensional detection frame and the target second three-dimensional detection frame;
calculating the bottom surface area of the overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame;
confirming the height of an overlapping area of the target first three-dimensional detection frame and the target second three-dimensional detection frame;
calculating a third volume of the overlapping area by using the bottom surface area and the height;
calculating a second IoU value corresponding to each target object by using the first volume, the second volume and the third volume;
and constructing the second array by using the second IoU values corresponding to all the target objects.
6. The attack detection method of an autopilot system of claim 1 wherein the attack detection and localization of the attacked sensor based on the first array and the second array comprises:
judging whether the first array and the second array have outliers or not respectively;
when the first array has an outlier and the second array does not have an outlier, confirming that the three-dimensional sensor is attacked;
when the first array has no outlier and the second array has an outlier, confirming that the two-dimensional sensor is attacked;
when the first array and the second array both have outliers, confirming that the three-dimensional sensor and/or the two-dimensional sensor are attacked;
and when the first array and the second array do not have outliers, confirming that the three-dimensional sensor and the two-dimensional sensor are not attacked.
7. The attack detection method of an autopilot system of claim 1, wherein the three-dimensional sensor includes a lidar, the two-dimensional sensor includes a camera, the three-dimensional detection model is constructed based on a PointPillars algorithm, the two-dimensional detection model is constructed based on a YOLOv3 algorithm, and the fusion detection model is constructed based on an AVOD algorithm.
8. An attack detection device of an automatic driving system, comprising:
the acquisition module is used for respectively acquiring three-dimensional information data and two-dimensional information data of the target area by using a three-dimensional sensor and a two-dimensional sensor;
the first detection module is used for inputting the three-dimensional information data and the two-dimensional information data into a pre-trained three-dimensional detection model and a pre-trained two-dimensional detection model respectively to obtain a first three-dimensional detection frame and a first two-dimensional detection frame of each target object in the target area;
the second detection module is used for inputting the three-dimensional information data and the two-dimensional information data into a pre-trained fusion detection model to obtain a second three-dimensional detection frame of each target object in the target area, and carrying out coordinate system change on the second three-dimensional detection frame to obtain a second two-dimensional detection frame;
the computing module is used for computing a first IoU value of the first two-dimensional detection frame and the second two-dimensional detection frame corresponding to each target object to obtain a first array, and computing a second IoU value of the first three-dimensional detection frame and the second three-dimensional detection frame corresponding to each target object to obtain a second array;
and the analysis module is used for performing attack detection and localization of the attacked sensor based on the first array and the second array.
9. A computer device, comprising a processor and a memory coupled to the processor, wherein the memory stores program instructions that, when executed by the processor, cause the processor to perform the steps of the attack detection method of an autopilot system according to any one of claims 1 to 7.
10. A storage medium storing program instructions for implementing the attack detection method of the autopilot system according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202310584658.0A | 2023-05-23 | 2023-05-23 | Attack detection method, device and equipment for automatic driving system and storage medium |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN116912788A | 2023-10-20 |
Family ID: 88357100
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |