Article

Incremental Learning for LiDAR Attack Recognition Framework in Intelligent Driving Using Gaussian Processes

1 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen 518055, China
2 Shenzhen Institute of Advanced Technology, University of Chinese Academy of Sciences, Beijing 100049, China
3 Faculty of Computility Microelectronics, Shenzhen University of Advanced Technology (SUAT), Shenzhen 518055, China
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2024, 15(8), 362; https://doi.org/10.3390/wevj15080362
Submission received: 16 July 2024 / Revised: 1 August 2024 / Accepted: 6 August 2024 / Published: 12 August 2024
(This article belongs to the Special Issue Design Theory, Method and Control of Intelligent and Safe Vehicles)

Abstract:
The perception system plays a crucial role by integrating LiDAR and various other sensors to perform localization and object detection, which ensures the security of intelligent driving. However, existing research indicates that LiDAR is vulnerable to sensor attacks, which lead to inappropriate driving strategies and call for effective attack recognition methods. Previous LiDAR attack recognition methods rely on fixed anomaly thresholds, obtained from depth map data distributions in specific scenarios, as static anomaly boundaries, which leads to reduced accuracy, increased false alarm rates, and a lack of performance stability. To address these problems, we propose an adaptive LiDAR attack recognition framework capable of adjusting to different driving scenarios. The framework first models the perception system by integrating a vehicle dynamics model and object tracking algorithms to extract data features, and then employs Gaussian Processes for the probabilistic modeling of these features. Finally, the framework applies sparsification computing techniques and a sliding window strategy to continuously update the Gaussian Process model with window data, achieving incremental learning that generates uncertainty estimates as dynamic anomaly boundaries to recognize attacks. The performance of the proposed framework has been evaluated extensively on the real-world KITTI dataset covering four driving scenarios. Compared to previous methods, our framework achieves a 100% accuracy rate and a 0% false positive rate in the localization system, and an average increase of 3.43% in detection accuracy in the detection system across the four scenarios, demonstrating superior adaptive capabilities.

1. Introduction

The security of perception systems in intelligent driving is crucial, as it directly impacts passenger safety and the integrity of the surrounding environment [1]. These systems leverage sensors, including GPS, IMU, LiDAR, and cameras, to gather and process environmental data [2]. However, this dependency also makes the perception systems vulnerable to sensor attacks, which can manipulate data and mislead vehicle operations, potentially resulting in significant traffic incidents. LiDAR sensor attacks are characterized by their low technical requirements, significant potential harm, and challenging traceability, making them a substantial threat to the safety of intelligent driving [1]. The task of defending against these attacks is particularly arduous due to the stealthy nature of the manipulations and the ineffectiveness of previous detection methods like anomaly detection [2,3]. Vulnerable sensors, including cameras, GPS, and LiDAR, are prone to various attacks such as deception, interference, and replay. These attacks can cause incorrect assessments of vehicle positioning or failures in accurately perceiving the environment, thereby escalating the risk of accidents [4]. Effective countermeasures are imperative to mitigate these risks and enhance the resilience of intelligent driving perception systems against potential sensor attacks.
LiDAR attack recognition methods are pivotal in the realm of intelligent driving security. These methods typically encompass sensor redundancy, deep learning, and rule-based algorithms. Sensor redundancy enhances environmental sensing by integrating multiple sensors such as cameras, GPS, and LiDAR, thereby creating a more robust detection framework [5]. Deep learning techniques are employed to train on extensive datasets, enabling sophisticated recognition of attacks [6]. Rule-based algorithms utilize predefined sets of rules and models to manage specific scenarios or types of attacks [7]. However, each method has limitations. Sensor redundancy, while increasing robustness, demands high accuracy and consistency from sensors, which can be compromised by noise and direct attacks, leading to escalated costs [2]. Deep learning approaches hinge on large-scale annotated data, which are scarce in the intelligent driving sector. This scarcity can lead to vulnerabilities under sensor attack scenarios due to potential misdirection [6]. Lastly, rule-based algorithms can struggle with consistency across the varied and dynamic conditions of intelligent driving, as they may not encompass all possible scenarios [4]. Addressing these challenges is essential for advancing sensor attack recognition and enhancing the security posture of intelligent driving systems.
Previous methods of attack recognition in intelligent driving are executed in distinct phases, which complicate hardware implementation and exacerbate communication delays. The Gaussian Process (GP) offers a novel approach to overcome these recognition challenges. GP is capable of end-to-end attack recognition, allowing for simultaneous processing without the need for extensive training datasets. This makes it particularly well suited for scenarios with limited sample sizes, thus enhancing its applicability across diverse driving conditions [8]. In instances of sensor attacks, GP can adeptly model potential noise and anomalies, thereby bolstering the system’s resilience against malicious interventions. Moreover, GP facilitates adaptive recognition of data, enabling the system to adjust more effectively to complex and dynamic driving environments [9].
In this paper, we propose an incremental learning method for LiDAR sensor attack recognition that leverages a joint system model and GP. Our approach sets itself apart from conventional data-driven methodologies by integrating system models of intelligent driving with a data-driven GP. This integration significantly enhances the capabilities for incremental learning and detection within intelligent driving systems. To ensure the adaptive performance of our proposed framework, we continuously update the GP model using data from a sliding window to accommodate dynamic changes in intelligent driving scenarios. Compared with purely data-driven techniques, attackers targeting our method would need to navigate both the intricacies of the system model and the data-driven components. This dual requirement makes such attacks particularly challenging to execute, positioning our recognition mechanism among the most formidable in contemporary research.
The main contributions of this paper can be summarized as follows.
  • We have developed an attack recognition framework for LiDAR attacks within intelligent driving perception systems, encompassing both localization and detection systems. In this framework, we model the localization system using a vehicle dynamics model and the detection system using an object tracking algorithm, from which we extract data features. Subsequently, we employ Gaussian Processes to perform probabilistic modeling of these data features, which predict uncertainty estimates to effectively recognize LiDAR attacks.
  • We propose an innovative incremental learning framework for the adaptive recognition of sensor attacks in intelligent driving, capable of adapting to dynamically changing driving environments. Our approach integrates sliding window techniques, sparsification computing, and Gaussian Processes, which allow for updates within the sliding windows to continuously adjust the Gaussian Process probabilistic model for incremental learning. Compared to previous methods, our framework maintains a 100% accuracy rate and a 0% false positive rate in the localization system and improves the accuracy by an average of 3.43% in the detection system across various driving scenarios.
The structure of this paper is as follows: Section 2 provides an overview of the LiDAR sensor attack and Gaussian Process. Section 3 states the problem in this work. Section 4 introduces the proposed framework. Section 5 conducts simulations that demonstrate the performance of the proposed framework. Finally, Section 6 summarizes the key conclusions and discusses the significance of the findings.

2. Related Works

2.1. LiDAR Sensor Attack

Intelligent driving heavily relies on sensors to perceive the surroundings and make decisions. LiDAR is crucial for precise navigation and obstacle avoidance. However, if the LiDAR is compromised, it may provide misleading data, leading the vehicle to make erroneous driving decisions [10]. Common attack types include LiDAR replay and spoofing attacks, as shown in Figure 1.
  • LiDAR replay attack: Attackers record LiDAR data in a specific environment and replay it under different circumstances. This type of attack can cause the LiDAR system to misjudge the current environmental state, mistakenly identifying safe areas as obstructed, or vice versa, recognizing hazardous areas as safe. Such attacks pose a direct threat to the safe operation of intelligent driving [3].
  • LiDAR spoofing attack: Attackers send forged signals to the LiDAR system, inducing incorrect environmental perception data. These attacks can lead to navigational errors and may prevent the intelligent driving system from correctly identifying other vehicles, pedestrians, or obstacles on the road, thereby causing severe traffic accidents.
Attack recognition against LiDAR sensor attacks presents multiple challenges. First, attackers may use complex methods such as signal simulation or data tampering, which are difficult to detect through conventional data verification processes [11]. Additionally, the diversity and rapid evolution of attack methods mean that previous recognition mechanisms are often unable to cover all potential attack scenarios. Consequently, developing dynamic and incremental learning recognition systems continues to be a crucial and challenging area of research.

2.2. Gaussian Process

GP is a probabilistic, non-parametric statistical model extensively used for regression and probabilistic classification tasks [8]. These models establish a prior probability distribution over function spaces, ensuring that any finite set of functions exhibits a multivariate Gaussian distribution. In the field of sensor attack detection for intelligent driving, GP is utilized to construct statistical models of sensor data to monitor for anomalies or signs of attacks within the data stream. The model demonstrates significant adaptability when handling complex environmental data and integrating prior knowledge, while the flexible selection of kernel functions allows it to accommodate various data relationships. However, GP faces significant challenges including high computational complexity and difficulties in hyperparameter tuning, which limit its applicability in large-scale data scenarios. Furthermore, the highly dynamic nature of intelligent driving environments necessitates that models swiftly adapt to new data and scenarios. However, GP methodologies still require further optimization to facilitate real-time updates and parameter adjustments.

3. Problem Statement

In this section, we state the problem of recognizing attacks on perception systems, as shown in Figure 2. In intelligent driving applications, perception sensors receive a legitimate input signal $z_s$ of the vehicle, where $s \in \{gps, lidar, imu, \ldots, camera\}$ denotes the sensor type. However, the measurements $z_s^m$ of sensor $s$ are corrupted with noise, which can be expressed as follows:
$$z_s^m = z_s + \epsilon,$$
where $\epsilon$ is the sensor measurement noise, which covers wind noise, road noise, and other interference in real road conditions. Given the vulnerability of sensors to various types of sensor attacks, measurements can sometimes be compromised, resulting in anomalies. To accurately characterize sensor observations under such conditions, the following adjusted measurement equation is proposed:
$$z_s^m \leftarrow \begin{cases} z_s^m + \xi, & \text{if sensor } s \text{ is attacked}, \\ z_s^m, & \text{otherwise}, \end{cases}$$
where $\xi$ represents the signal effect caused by an attacker, rather than the attack itself. The challenge of attack recognition primarily involves accurately distinguishing whether measurements are merely disturbed by noise or corrupted by an attack. To address this, we propose a two-pronged solution. Firstly, a data feature extraction strategy, denoted by $E(\cdot)$, is essential for isolating the relevant data features $e_s^m$ from the sensor measurements, as illustrated below.
$$e_s^m = E(z_s^m).$$
Subsequently, it is necessary to devise a recognition strategy, denoted as $K(\cdot)$, which establishes an anomaly decision boundary. This strategy further determines whether the sensor measurement $z_s^m$ represents a normal operation or an attacked sensor measurement $z_s^a$, and recognizes the attacked sensor $S_a$. This approach is detailed in the following section.
$$S_a = s \quad \text{if } e_s^m \notin K(e_s^m).$$
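As a concrete illustration, the two strategies $E(\cdot)$ and $K(\cdot)$ can be sketched in Python. The sensor names, feature values, and interval-style boundaries below are hypothetical stand-ins; the paper instantiates $K(\cdot)$ with GP uncertainty bounds in Section 4.

```python
def extract_feature(z_m, extractor):
    """E(.): map a raw sensor measurement z_s^m to a data feature e_s^m."""
    return extractor(z_m)

def recognize_attack(features, boundaries):
    """K(.): flag every sensor whose feature falls outside its decision boundary.

    features:   dict sensor -> feature value e_s^m
    boundaries: dict sensor -> (lower, upper) anomaly decision boundary
    """
    attacked = []
    for sensor, e in features.items():
        lo, hi = boundaries[sensor]
        if not (lo <= e <= hi):  # outside K(.)  ->  attacked sensor S_a = s
            attacked.append(sensor)
    return attacked

# Hypothetical example: the LiDAR displacement feature violates its boundary.
features = {"lidar": -0.54, "imu": 0.02}
boundaries = {"lidar": (-0.30, 0.30), "imu": (-0.10, 0.10)}
attacked = recognize_attack(features, boundaries)  # -> ["lidar"]
```

The GP framework of Section 4 replaces the fixed `(lower, upper)` tuples with per-time-step uncertainty intervals, which is what makes the boundary adaptive.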

4. Proposed Framework

Figure 3 depicts the architecture of our proposed framework. Our approach is structured into three key components: feature extraction using a system model, prediction using the Gaussian Process and recognition using uncertainty quantification.

4.1. Feature Extraction Using System Model

In the feature extraction phase, the perception system is modeled to systematically extract relevant data features. Perception systems are mainly divided into localization systems and detection systems. On the side of the localization system, the vehicle dynamics model is formulated as follows:
$$\begin{bmatrix} x \\ y \\ \theta \end{bmatrix}_{k+1} = \begin{bmatrix} x + \frac{v}{\omega}\left(\sin(\theta + \omega \Delta t) - \sin\theta\right) \\ y + \frac{v}{\omega}\left(\cos\theta - \cos(\theta + \omega \Delta t)\right) \\ \theta + \omega \Delta t \end{bmatrix}_k.$$
In the described system, the input data features for the localization subsystem in a Gaussian Process (GP) include the position coordinates $(x, y)$, velocity $v$, yaw angle $\theta$, turn rate $\omega$, and time interval $\Delta t$, with $k$ serving as the time index. The position and angular information are derived through the fusion of multiple sensors, including LiDAR, GPS, and IMU.
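For intuition, one propagation step of the dynamics model in Equation (5) can be written directly in Python. This is a minimal sketch; the $\omega \approx 0$ guard for straight-line driving is our added assumption, not part of the paper's formulation.

```python
import math

def ctrv_step(x, y, theta, v, omega, dt):
    """One step of the constant-turn-rate-and-velocity model of Equation (5).

    Minimal sketch, not the authors' implementation; the omega ~ 0 branch is an
    added assumption to avoid division by zero during straight-line driving.
    """
    if abs(omega) < 1e-9:
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    x_next = x + (v / omega) * (math.sin(theta + omega * dt) - math.sin(theta))
    y_next = y + (v / omega) * (math.cos(theta) - math.cos(theta + omega * dt))
    return x_next, y_next, theta + omega * dt
```

Feeding the fused $(x, y, \theta, v, \omega, \Delta t)$ state through such a step yields the model-predicted pose against which the sensor-reported increments are later compared.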
On the side of the detection system, the model is characterized by the Intersection over Union (IoU) metric between LiDAR and stereo cameras, which supports object detection and tracking. The GP of the detection system utilizes the previous moment's IoU $IoU_{t-1}^{id}$, identifier $id$, and angle $\alpha_{t-1}^{id}$ as inputs. This modeling approach is articulated in the subsequent formulation.
$$IoU_t^{id} = \left[\frac{Area_{lidar} \cap Area_{cam1}}{Area_{lidar} \cup Area_{cam1}},\ \frac{Area_{cam1} \cap Area_{cam2}}{Area_{cam1} \cup Area_{cam2}}\right] = \left[IoU_{t,1}^{id},\ IoU_{t,2}^{id}\right].$$
In this formulation, $id$ denotes the object identifier obtained from object tracking, as referenced in [12]. The right-side stereo camera serves as the reference camera. $Area_{lidar}$ represents the area where the LiDAR 3D object detection box projects onto the image plane of the reference stereo camera, denoted as $cam1$. The areas $Area_{cam1}$ and $Area_{cam2}$ correspond to the 2D object detection regions captured by the stereo cameras within the reference coordinate system. $IoU_{t,1}^{id}$ quantifies the overlap between the LiDAR and the reference camera's object detection results for the object identified by $id$ at time $t$. Similarly, $IoU_{t,2}^{id}$ measures the IoU between the stereo camera detections of the same object at the same time.
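Each entry of the feature vector in Equation (6) reduces to a standard box-overlap ratio on the reference image plane. A minimal sketch with axis-aligned 2D boxes; the box coordinates below are made-up values, not KITTI detections.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Feature vector of Equation (6): [LiDAR vs reference camera, camera vs camera].
lidar_box, cam1_box, cam2_box = (0, 0, 4, 4), (1, 1, 5, 5), (1, 0, 5, 4)
feature = [iou(lidar_box, cam1_box), iou(cam1_box, cam2_box)]
```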

4.2. Prediction Using Gaussian Process

GP predicts the uncertainty quantification as the anomaly boundary. The GP characterizes a collection of random variables, wherein any finite subset follows a joint Gaussian distribution [8]. In the context of a state $\hat{x}_t \in \mathbb{R}^D$, an action $u_t \in \mathbb{R}^U$, and system noise $w$, the stochastic system dynamics of $\hat{x}$ are expressed as follows:
$$\hat{x}_{t+1} = f(\hat{x}_t, u_t) + w,$$
where $\tilde{x}_t := (\hat{x}_t, u_t)$ denotes the training input tuple and $y_t := \hat{x}_{t+1}$ represents the corresponding output. Let $i = 1, \ldots, D$ be the dimension index and $U$ the dimension of the action. The system noise in each dimension is modeled as $w_i \sim \mathcal{N}(0, \sigma_{w_i}^2)$ with variance $\sigma_{w_i}^2$. The objective is to train a GP to capture the latent function $y_t^i = f_i(\tilde{x}_t) + w_i$, and the associated covariance kernel function is expressed as follows:
$$k_i(\tilde{x}, \tilde{x}') = \sigma_{f_i}^2 \exp\left(-\tfrac{1}{2}(\tilde{x} - \tilde{x}')^{\top} \Lambda_i^{-1} (\tilde{x} - \tilde{x}')\right),$$
where $\sigma_{f_i}^2$ is the variance of $f_i$ and $\Lambda_i$ is the diagonal matrix of length scales in the kernel. Let the input and output sample sets be denoted as $X = [\tilde{x}_1, \ldots, \tilde{x}_N]$ and $Y = [y_2, \ldots, y_{N+1}]$, respectively. The hyperparameters of the GP model $\theta_i = [\sigma_{f_i}^2, \Lambda_i, \sigma_{w_i}^2]$ are learned by maximizing the log marginal likelihood using evidence maximization [8,13].
$$\log p(Y_i \mid \theta_i) = -\frac{N}{2}\log(2\pi) - \frac{1}{2}\log\left|K_i + \sigma_{w_i}^2 I\right| - \frac{1}{2} Y_i^{\top} \beta_i,$$
where $K_i$ is the kernel matrix with elements calculated by Equation (8) and $\beta_i = (K_i + \sigma_{w_i}^2 I)^{-1} Y_i$. For any new input $\tilde{x}^*$, the learned GP provides the posterior mean and variance as follows:
$$m_{f_i}(\tilde{x}^*) = k_{*i}^{\top} (K_i + \sigma_{w_i}^2 I)^{-1} Y_i = k_{*i}^{\top} \beta_i,$$
$$\sigma_{f_i}^2(\tilde{x}^*) = k_{**i} - k_{*i}^{\top} (K_i + \sigma_{w_i}^2 I)^{-1} k_{*i},$$
where $k_{*i} = k_i(\tilde{X}, \tilde{x}^*)$ and $k_{**i} = k_i(\tilde{x}^*, \tilde{x}^*)$.
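The posterior equations (10) and (11) can be reproduced in a few lines of numpy. This is a minimal sketch with a single shared length scale and fixed hyperparameters, rather than the GPflow implementation with evidence maximization used in the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma_f=1.0, length=1.0):
    """Squared-exponential kernel of Equation (8) with a shared length scale
    (the paper uses a diagonal matrix of per-dimension length scales)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, Y, X_star, sigma_w=0.1):
    """Posterior mean and variance of Equations (10) and (11)."""
    K = rbf_kernel(X, X) + sigma_w ** 2 * np.eye(len(X))
    k_star = rbf_kernel(X, X_star)           # k_* = k(X, x_*)
    beta = np.linalg.solve(K, Y)             # beta = (K + sigma_w^2 I)^{-1} Y
    mean = k_star.T @ beta
    var = rbf_kernel(X_star, X_star).diagonal() - np.einsum(
        "ij,ij->j", k_star, np.linalg.solve(K, k_star))
    return mean, var

# Noisy sine data: the posterior variance grows away from the training inputs,
# which is exactly what widens the anomaly boundary in unfamiliar conditions.
X = np.linspace(0, 5, 20).reshape(-1, 1)
Y = np.sin(X).ravel()
mean, var = gp_posterior(X, Y, np.array([[2.5], [10.0]]))
```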
To ensure adaptability, this study integrates sparse computation and sliding windows with the GP. Sparse Gaussian Processes [14] approximate the GP kernel using inducing points, reducing computational complexity. By partitioning the input space into sub-regions determined by the sliding window width $s$, each window selects inducing points to approximate the covariance matrix. This approach significantly reduces computational requirements compared to using all data points. Furthermore, integrating sliding windows with the GP facilitates incremental learning, enabling adaptation to dynamic datasets. Upon new data arrival, the model updates by adjusting the inducing points and covariance matrix within the current sliding window, avoiding recomputation of the covariance matrix over the entire dataset, as shown in Figure 4. This local update mechanism effectively supports efficient model adaptation.
To ensure real-time performance, parallel computation is employed. Tasks are distributed across multiple processors or nodes, allowing concurrent computation of inducing point selection and covariance matrix updates within each sub-region defined by the sliding window. GPU acceleration further enhances efficiency by speeding up large-scale matrix operations. This integrated approach, combining sliding windows and sparse GP, enables efficient incremental learning and effectively adapts to intelligent driving datasets. Concurrently, the GP in the detection system outputs the IoU values $IoU_t^{id}$ for each time $t$ and object $id$, while in the localization system the GP outputs the position increment $\Delta x_t^g$ and orientation increment $\Delta \theta_t^g$ over time.
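The sliding-window update can be illustrated with a toy one-dimensional version. This sketch drops the inducing-point sparsification and parallelism described above and simply refits an exact GP on the newest window of samples, so it shows the incremental-learning behavior rather than the full method.

```python
from collections import deque
import numpy as np

class SlidingWindowGP:
    """Toy sketch of the incremental-learning idea: keep only the newest
    `window` samples and refit the GP on them, so the uncertainty bound tracks
    the current driving scenario. Inducing-point sparsification, hyperparameter
    learning, and parallel updates from the paper are omitted for brevity."""

    def __init__(self, window=200, sigma_w=0.1, length=1.0):
        self.X, self.Y = deque(maxlen=window), deque(maxlen=window)
        self.sigma_w, self.length = sigma_w, length

    def update(self, x, y):
        # New sample enters the window; the oldest is dropped automatically.
        self.X.append(x)
        self.Y.append(y)

    def predict(self, x_star):
        X, Y = np.array(self.X), np.array(self.Y)
        k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / self.length ** 2)
        K = k(X, X) + self.sigma_w ** 2 * np.eye(len(X))
        ks = k(X, np.array([x_star]))
        mean = ks.T @ np.linalg.solve(K, Y)           # Equation (10)
        var = 1.0 - ks.T @ np.linalg.solve(K, ks)     # Equation (11)
        return float(mean), float(var)
```

Because `predict` only ever sees the current window, each refit costs a fixed amount regardless of how long the drive has lasted, which is the property the inducing-point approximation then improves further.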

4.3. Recognition Using Uncertainty Quantification

Attack recognition compares the extracted data features with the uncertainty quantification predicted by the GP. If the measurements fall within the uncertainty quantification $C_t$ given below, we conclude that there is no attack.
$$C_t = m_{f_i}(\tilde{x}^*) \pm z\,\sigma_{f_i}(\tilde{x}^*),$$
where $z$ is the quantile of the standard normal distribution at a given confidence level. Conversely, if the measurements fall outside the uncertainty quantification, an attack is recognized, completing the detection process. $C_t$ of the localization system is specified as follows.
$$C_t = [C_t^x, C_t^y, C_t^{\theta}].$$
If the displacement measurements $(\Delta x_t^g, \Delta y_t^g)$ or $(\Delta x_t^l, \Delta y_t^l)$ exceed the corresponding uncertainty bounds $C_t^x$ and $C_t^y$, the affected sensor is identified as compromised. Similarly, if the orientation measurement of the IMU $\Delta \theta_t$ deviates beyond its specified uncertainty quantification $C_t^{\theta}$, the IMU is designated as the attacked sensor. Moreover, the detection system incorporates multiple objects, each associated with a specific uncertainty quantification $C_t^{id}$, defined as follows:
$$C_t^{id} = [C_{t,1}^{id}, C_{t,2}^{id}].$$
If $IoU_{t,1}^{id}$ deviates from $C_{t,1}^{id}$ while $IoU_{t,2}^{id}$ remains within $C_{t,2}^{id}$, the system infers a potential attack on the LiDAR. Conversely, if $IoU_{t,1}^{id}$ adheres to $C_{t,1}^{id}$ but $IoU_{t,2}^{id}$ diverges from $C_{t,2}^{id}$, the inference points to a potential attack on the camera. In situations where both the IoU between the LiDAR and camera detection boxes and the IoU between the detection boxes of the stereo cameras fail to fall within their respective confidence intervals, the system hypothesizes that both the camera and the LiDAR may be under attack.
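The case analysis above amounts to a small decision table over the two IoU features. A sketch of that logic; the interval values in the example are made up for illustration, except that they echo the magnitudes reported later in Section 5.2.

```python
def attribute_attack(iou_1, iou_2, c1, c2):
    """Decision logic of Section 4.3 for one tracked object: compare the two
    IoU features against their GP uncertainty intervals (lower, upper)."""
    lidar_ok = c1[0] <= iou_1 <= c1[1]    # LiDAR vs reference camera
    stereo_ok = c2[0] <= iou_2 <= c2[1]   # camera vs camera
    if lidar_ok and stereo_ok:
        return "no attack"
    if not lidar_ok and stereo_ok:
        return "lidar attacked"
    if lidar_ok and not stereo_ok:
        return "camera attacked"
    return "lidar and camera attacked"

# A LiDAR-camera feature of -3.80 outside [-3.56, 2.50], stereo feature inside.
verdict = attribute_attack(-3.80, 0.4, (-3.56, 2.50), (0.0, 1.0))  # -> "lidar attacked"
```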

5. Experimental Results

5.1. Experimental Setup

In this section, we demonstrate the proposed framework on the real-world KITTI dataset [15] under the LiDAR spoofing attack (LSA) and LiDAR replay attack (LRA). We use four driving scenarios including city, residential, campus, and road environments, as shown in Figure 5. We conduct one thousand experiments for each attack type in each scenario, totaling eight thousand experiments. To collaboratively achieve the functionality of the perception system, it is essential to align the data from sensors with different sampling frequencies. Given that the LiDAR has the lowest sampling frequency, set at 10 Hz, each time index corresponds to 0.1 s. We set each attack to occur randomly once within a sequence of 12,000 time indexes, with an anomaly sampling rate of 0.025%, because multiple attacks can lead to error accumulation and real-world attackers do not engage in sustained attacks on a moving vehicle [2]. We design the LRA based on previous experiments [16]: it occurs at a randomly selected time index, and the delay of the recorded data is chosen randomly from 3 to 5 seconds. The LSA is simulated based on the experimental results in [11]. We randomly select two forward beams at an angle of 10° from the origin of the LiDAR coordinate system, choose a distance of 12 m, and generate 120 random pseudo points at this distance between the two selected beams. The height of all pseudo points does not exceed 1.7 m, which is the typical height of a vehicle.
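The LSA point injection described above can be sketched as follows. The beam geometry (azimuths sampled uniformly between the two beams and a fixed 12 m range in the horizontal plane) is our simplifying assumption about the setup in [11], not the authors' exact generator.

```python
import math
import random

def spoof_points(n=120, angle_deg=10.0, distance=12.0, max_height=1.7, seed=0):
    """Hedged sketch of the LSA simulation: scatter n pseudo points at a fixed
    horizontal distance between two forward beams separated by angle_deg,
    with heights below a typical vehicle height."""
    rng = random.Random(seed)
    half = math.radians(angle_deg) / 2.0
    points = []
    for _ in range(n):
        a = rng.uniform(-half, half)      # azimuth between the two beams
        z = rng.uniform(0.0, max_height)  # pseudo-point height
        points.append((distance * math.cos(a), distance * math.sin(a), z))
    return points
```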
Based on extensive experimental validation, this study determined optimal parameter settings to enhance model performance. Specifically, for the localization system, a sliding window size of four thousand time indexes was chosen to capture dynamic changes in the time-series data, and 450 inducing points were selected to balance the computational efficiency and prediction accuracy of the GP model. For the detection system, a sliding window size of 3600 and 350 inducing points were chosen. Moreover, a radial basis function was adopted as the kernel function due to its broad adaptability and robust expressive power when handling diverse datasets. We conducted three-fold cross-validation on a dataset consisting of 12,000 time indexes, uniformly dividing it into three subsets for model training and validation. Subsequently, we selected the optimal number of inducing points, window size, and kernel function based on the best performance metrics. These models and computations were implemented using the GPflow and TensorFlow frameworks [17], both of which support efficient parallel computation.

5.2. Discussion and Analysis

The experimental results, presented in Table 1, demonstrate the effectiveness of our proposed framework for the localization system under the LRA. In this particular test, the LRA occurs at time index 203, resulting in a $\Delta X$ of −0.54 m. These values indicate that the LiDAR position measurements have been falsified to neighboring positions. Figure 6 presents the experimental results of the proposed framework in the localization system, showing that $\Delta x_{203}^l$ and $\Delta y_{203}^l$ of the LiDAR exceed the bounds $C_t^x$ and $C_t^y$ predicted by the GP. The LSA spoofs the LiDAR object detection algorithm into incorrectly detecting the position of the object. Figure 7 shows the experimental results of the proposed framework in the detection system. From Figure 7, we observe that $IoU_{4,1}^{39}$ exceeds the uncertainty quantification, while $IoU_{t,2}$ does not report any anomaly. Table 2 shows the statistical results of the consistency data for the attacked LiDAR and stereo cameras. The table indicates that the consistency between the LiDAR and the camera changes drastically when the attack occurs, resulting in an $IoU_{4,1}^{39}$ value of −3.80, which is outside the uncertainty quantification range of $[-3.56, 2.50]$. Notably, the LSA does not affect the consistency between the stereo cameras, so $IoU_{4,2}^{39}$ remains within the GP-predicted uncertainty quantification. From these results, we can conclude that the LiDAR is under attack.
To highlight the performance of the proposed framework, we select the optimization-based attack against control systems (OACS) with cumulative sum (CUSUM)-based anomaly detection [2] for evaluating the localization system, and the LiDAR and image data fusion for perception attack detection (MDLAD) [1] for assessing the detection system. Figure 8 illustrates the accuracy and false alarm rate variation of the proposed framework, OACS, and MDLAD retrained on the KITTI dataset under sensor attacks. The proposed framework in the localization system consistently maintains a 100% accuracy and 0% false alarm rate across the four driving scenarios. In contrast, OACS achieves an average accuracy of 93.15% in road environments, with a moderate decline of 1.57% in city scenarios. The proposed framework achieves an average accuracy of 95.05% across the four driving scenarios, which represents an improvement of 3.28% compared with MDLAD. It is worth noting that both OACS and MDLAD rely on fixed thresholds based on data distributions and depth maps as decision boundaries for attack recognition. Depth map-based MDLAD suffers from inadequate adaptability primarily due to the complexity and variability of driving scenarios. Different driving scenarios such as city, road, residential areas, and campuses exhibit diverse road structures, traffic densities, and types of obstacles, which depth maps often fail to capture fully. Furthermore, depth maps, generated by LiDAR or stereo cameras, are susceptible to environmental factors like lighting conditions, weather variations, and occlusions. In contrast, our approach excels in adaptability by dynamically generating decision boundaries based on the current data.
Our proposed framework merges GP with sparse computation and sliding window techniques, which offers substantial benefits for incremental learning attack recognition tasks. This mechanism leverages sparse computation methods, enhancing the efficiency of applying GP to large-scale datasets. The integration of sliding window techniques allows the GP model to be continuously updated, retraining the model on the most recent subset of data to achieve incremental learning and dynamically adapt to data changes. Additionally, GP naturally offers uncertainty quantification in predictions, which enhances the flexibility and robustness of detection by dynamically adjusting recognition based on the data distribution. The proposed framework achieved an average false alarm rate of 0% in the four driving scenarios, a 4.23% reduction in false alarm rate compared to OACS. Although the MDLAD based on depth maps effectively handles the nonlinear relationships associated with complex spatial information, depth maps are prone to noise and data incompleteness due to occlusion and poor reflection. The proposed framework achieved an average false alarm rate of 3.03% in the detection system, which is 3.43% lower than that of MDLAD. Therefore, compared to MDLAD and OACS, the proposed framework demonstrates better adaptability across various driving scenarios.

6. Conclusions

In this paper, we integrate the intelligent driving system model with GP to propose an incremental learning-based approach for sensor attack recognition in intelligent driving systems. We first perform data feature extraction for the localization system and the detection system by combining the vehicle dynamics model and the object tracking algorithm, respectively. The GP then predicts the uncertainty quantification of the data features as the incremental learning detection boundary. Finally, we implement two common sensor attacks to verify the feasibility of the method and compare the experimental results with previous state-of-the-art methods to validate the superior incremental learning performance of the proposed framework. Future research directions include exploring advanced object tracking algorithms and refining vehicle dynamics models to enhance the adaptability of the proposed framework. Additionally, integrating machine learning techniques and addressing specific sensor attack scenarios are key areas for further investigation and improvement of intelligent driving security.

Author Contributions

Z.M.: writing—review and editing, conceptualization, methodology, formal analysis, investigation, software. C.S.: writing—review and editing, formal analysis, investigation, software, funding acquisition. Y.C.: writing—review and editing, formal analysis, software. H.L.: writing—review and editing, funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China with Grant No. 62004212, Shenzhen Basic Research Project with Grant Nos. JCYJ20210324101210027 and JCYJ20220818100814033, the Basic and Applied Basic Research Foundation of Guangdong Province with Grant No. 2022A1515010284, and Guangdong Provincial Key Laboratory of Computility Microelectronics.

Data Availability Statement

The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GP      Gaussian Process
LRA     LiDAR replay attack
LSA     LiDAR spoofing attack
OACS    Optimization-based attack against control systems
MDLAD   Multi-modal deep learning for vehicle sensor data abstraction and attack detection
CUSUM   Cumulative sum

Figure 1. LiDAR replay attack and LiDAR spoofing attack.
Figure 2. Problem statement.
Figure 3. Architecture of the proposed framework.
Figure 4. The combination of sliding window and sparse Gaussian Process.
Figure 5. Driving scenarios in intelligent driving.
Figure 6. Experimental result of the proposed framework in the localization system under a LiDAR replay attack.
Figure 7. Experimental result of the proposed framework in the detection system under a LiDAR spoofing attack.
Figure 8. Adaptive analysis of the proposed framework.
Table 1. Statistical results of the proposed framework in the localization system under a LiDAR replay attack. The bold data represent the attack time.

| Time Index | Δx_t^g | Δy_t^g | Δx_t^l | Δy_t^l | Δθ_t | C_t^x | C_t^y | C_t^θ |
|---|---|---|---|---|---|---|---|---|
| 200 | 0.27 | 0.63 | 0.34 | 0.30 | −0.04 | [0.32, 2.71] | [−0.21, 2.14] | [−0.36, 0.45] |
| 201 | 0.29 | 0.63 | 0.34 | 0.31 | −0.03 | [0.17, 2.56] | [−0.31, 2.02] | [−0.35, 0.46] |
| 202 | 0.33 | 0.70 | 0.34 | 0.36 | −0.03 | [0.37, 2.70] | [−0.18, 2.12] | [−0.35, 0.46] |
| **203** | 0.31 | 0.64 | **−0.54** | 0.34 | −0.01 | [0.36, 2.70] | [−0.17, 2.10] | [−0.34, 0.47] |
| 204 | 0.32 | 0.63 | 0.37 | 0.39 | −0.01 | [0.20, 2.55] | [−0.28, 1.97] | [−0.34, 0.48] |
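The decision rule underlying Table 1 can be illustrated with a short check: a time step is flagged as an attack when any LiDAR-derived residual falls outside its GP confidence interval. The sketch below assumes the l-superscripted residuals (Δx_t^l, Δy_t^l) and Δθ_t are compared against C_t^x, C_t^y, and C_t^θ, respectively, which matches the bolded entry at time index 203 (Δx_t^l = −0.54 lies outside [0.36, 2.70]).

```python
def flags_attack(residuals, intervals):
    """Return True if any residual falls outside its GP confidence interval."""
    return any(not (lo <= r <= hi) for r, (lo, hi) in zip(residuals, intervals))


# Time index 203 from Table 1: dx_l = -0.54 is outside C_x = [0.36, 2.70].
attacked = flags_attack(
    residuals=[-0.54, 0.34, -0.01],
    intervals=[(0.36, 2.70), (-0.17, 2.10), (-0.34, 0.47)],
)

# Time index 204 is benign: every residual sits inside its interval.
benign = flags_attack(
    residuals=[0.37, 0.39, -0.01],
    intervals=[(0.20, 2.55), (-0.28, 1.97), (-0.34, 0.48)],
)
```

The same rule applies to Table 2, where the IoU-based track features are compared against their own GP intervals.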
Table 2. Statistical results of the proposed framework in the detection system under a LiDAR spoofing attack. The bold data represent the attack time.

| Time Index | IoU_{t,1}^{id} | IoU_{t,2}^{id} | C_{t,1}^{id} | C_{t,2}^{id} | Result |
|---|---|---|---|---|---|
| **439** | **−3.80** | −0.95 | [−3.56, 2.50] | [−4.34, 4.25] | LiDAR attack |
| 28 | 0.16 | −1.43 | [−3.26, 4.28] | [−4.36, 4.56] | |
| 55 | 0.72 | 0.26 | [−4.49, 4.77] | [−4.22, 3.06] | |
| 43 | −0.43 | 0.42 | [−2.94, 4.21] | [−4.24, 4.16] | |
| 40 | 0.46 | 0.20 | [−4.48, 4.07] | [−4.63, 3.83] | |
Miao, Z.; Shao, C.; Li, H.; Cui, Y. Incremental Learning for LiDAR Attack Recognition Framework in Intelligent Driving Using Gaussian Processes. World Electr. Veh. J. 2024, 15, 362. https://doi.org/10.3390/wevj15080362