Abstract
Drowsy driving can cause serious traffic accidents. In this paper, a vehicle driver drowsiness detection method using wearable electroencephalography (EEG) based on a convolutional neural network (CNN) is proposed. The presented method consists of three parts: data collection with wearable EEG, driver drowsiness detection, and an early warning strategy. Firstly, a wearable brain–computer interface (BCI) is used to monitor and collect EEG signals in simulated drowsy and awake driving. Secondly, neural networks with an Inception module and with a modified AlexNet module are trained to classify the EEG signals. Finally, the early warning module sounds an alarm if the driver is judged to be drowsy. The method was tested on EEG data from simulated drowsy driving. The network with the Inception module reached 95.59% classification accuracy on one-second time-window samples, and the modified AlexNet module reached 94.68%. The simulation and test results demonstrate the feasibility of the proposed drowsiness detection method for vehicle driving safety.
1 Introduction
With the improvement of living standards, more and more families own vehicles. Vehicles play an influential role in transportation because of their flexibility; they have also promoted the development of many industries and accelerated economic growth. According to public statistics, China had 261.5 million vehicles by the end of 2019 [1]. The growing number of vehicles also increases the number of traffic accidents, especially those caused by drowsy driving. In long-distance driving, drivers often stay behind the wheel for long periods to improve efficiency. Prolonged driving makes the driver tired and distracted, which may lead to fatal traffic accidents. On the other hand, drivers who regularly obtain insufficient sleep are also prone to drowsy driving. The American Automobile Association (AAA) estimates that one-sixth (16.5%) of fatal traffic accidents and one-eighth (12.5%) of accidents requiring hospitalization of the driver or passengers are caused by drowsy driving [2]. According to statistics from the US National Highway Traffic Safety Administration (NHTSA), 91,000 police-reported crashes involved drowsy drivers in 2017; these crashes led to an estimated 50,000 people injured and nearly 800 deaths [3]. The German Road Safety Council claims that 25% of highway traffic fatalities are caused by driver drowsiness [4]. A report has shown that drowsy driving can happen at any time, but occurs most frequently between midnight and 6 a.m. or in the late afternoon [3]. These statistics show that drowsy driving is extremely harmful and is one of the causes of major traffic accidents.
Successful detection of drowsiness is a crucial step toward reducing the societal cost of traffic accidents. Many published studies have tried to solve the problem of driver drowsiness detection [5,6,7]. Among them, methods based on computer vision account for the vast majority. Computer-vision-based measurements mainly detect the driver's eye motion, eye blinking, head motion and head position [8, 9]. However, with the spread of COVID-19, it has become normal for drivers to wear masks, which is a challenge for vision-based drowsiness detection. Moreover, drowsy driving often occurs under poor lighting, which also greatly affects detection accuracy. Many studies have shown that, among the many indicators of drowsiness, electroencephalographic (EEG) signals are regarded as the gold standard for drowsiness detection. Gharagozlou et al. [10] analyzed the EEG signals of sleep-deprived drivers performing simulated driving tasks and found that the \(\alpha\) wave (8–13 Hz) can be used as an indicator of driver drowsiness. Lin et al. [11] used the deviation from the correct lane line in a driving simulator as the evaluation index and found that the correlation between the \(\alpha\) wave and drowsiness is the strongest. In summary, EEG signals, especially the \(\alpha\) wave, are closely related to human drowsiness, so in this paper we mainly detect and analyze the \(\alpha\) band of EEG signals under drowsy driving. Reasonable analysis and processing of EEG signals can effectively predict driver drowsiness.
To detect the driver's drowsiness state accurately and in time and to reduce traffic accidents caused by drowsy driving, this paper proposes a vehicle driver drowsiness detection method using wearable EEG based on a convolutional neural network (CNN). The system architecture is shown in Fig. 1 and consists of three parts: data collection using wearable EEG, the driver drowsiness detection module, and the early warning strategy. Firstly, a wearable brain–computer interface (BCI) is used to monitor and collect the EEG signals. We collect EEG signals in two different conditions: one is sleep deprivation, with the test time between 3 a.m. and 5 a.m.; the other is after a normal night's sleep, with the test time between 10 a.m. and noon. Secondly, the collected EEG signals are pre-processed using a linear filter, fast independent component analysis (FastICA) and wavelet threshold denoising to obtain high-quality EEG signals. Then, the convolutional neural networks with an Inception module and with a modified AlexNet module are trained to classify the EEG signals. Finally, the early warning module sounds an alarm if the driver is judged to be drowsy. The feasibility of the proposed drowsiness detection method for vehicle driving safety is demonstrated by the simulation and test results.
The contributions can be summarized as follows: (1) a vehicle driver drowsiness detection method using wearable EEG is proposed to warn vehicle drivers under drowsy conditions; (2) neural networks with an Inception module and with a modified AlexNet module are used to extract features from the EEG signals and to train on and classify them; (3) an early warning device indicates the driver's status: if the driver is normal, the device shows a white light; if the driver is judged to be drowsy, it shows a red light and sounds an alarm.
The rest of this paper is organized as follows. Section 2 reviews the related work. The proposed methodology is described in Sect. 3. Section 4 analyzes the simulation and test results. Finally, conclusions are provided in Sect. 5.
2 Related work
The driver plays an important role in vehicle driving, so driver drowsiness detection and early warning can effectively reduce traffic accidents. According to the equipment and methods used, drowsiness detection can be divided into three categories of measures.
The first method evaluates the driver state by measuring vehicle behaviors. It mainly analyzes vehicle movement, such as steering wheel movement, accelerator pedal movement, lane keeping and braking, to determine the driver's alertness [12, 13]. Mortazavi et al. [14] found that when the driver is drowsy, reaction speed is reduced because the brain is not fully awake, which manifests as weakened steering wheel control. Forsman et al. [15] developed a driver drowsiness detection method for moderate levels of fatigue, which could give the driver sufficient time to reach a rest stop.
The second method is based on visual features, mainly analyzing the driver's eye state, mouth state, expression, head position and head motion through a camera [16, 17]. The driver's blinking frequency and eye-closing time in the drowsy state differ from the normal state, so many drowsiness detection methods focus on eye detection [18]. Head pose is also an important feature for judging whether the driver is drowsy: drowsy drivers may lower their heads or lean to the side [19]. For more robust and reliable driver inattention monitoring, some researchers have combined several facial cues. Mbouna et al. [20] presented a method analyzing both eye state and head pose for continuous monitoring of driver alertness.
The third method is based on physiological signals, such as the electrocardiogram (ECG), electrooculogram (EOG), electromyogram (EMG) and electroencephalogram (EEG) [21]. Khushaba et al. [22] extracted related information from ECG, EEG and EOG and used a fuzzy wavelet-packet-based feature extraction algorithm to classify the drowsiness state. Many studies have shown that, among the many indicators of drowsiness, EEG-based methods are the most promising and feasible for drowsiness detection [23,24,25]. Lin et al. [26] proposed a brain–computer interface system that acquires and analyzes EEG signals in real time to monitor and warn of driver drowsiness; the system obtained an average sensitivity of 88.7% and a positive predictive value of 76.9%. Chai et al. [27] presented an EEG-based driver drowsiness classification method using sparse deep belief networks and autoregressive modeling, which achieved a classification accuracy of 90.6%. Yeo et al. [28] used support vector machines (SVM) to identify and differentiate EEG changes between alert and drowsy states and obtained an accuracy of over 90%. Gu et al. [29] surveyed the recent literature on EEG-based intelligent BCI technologies and introduced driving fatigue detection research using deep learning algorithms. Gao et al. [30] proposed an EEG-based spatial–temporal convolutional neural network (ESTCNN) for driver drowsiness detection and achieved a high accuracy of 97.37%. Zeng et al. [31] developed two mental-state classification models, EEG-Conv and EEG-Conv-R, for driver drowsiness detection and obtained 91.788% and 92.682% classification accuracy, respectively.
From the above literature, it can be seen that EEG signals are widely used in driver drowsiness detection. However, researchers have also found that EEG signals are very weak and susceptible to background noise. Therefore, how to extract high-quality EEG signals under drowsy driving and how to accurately classify them require further research.
3 Methodology
3.1 EEG signal acquisition
Acquisition of EEG signals is the first step. We adopt a programmable acquisition scheme based on the open-source OpenBCI platform. The Ag–AgCl dry electrode is shown in Fig. 2a; it is more practical to apply than a medical wet electrode. As shown in Fig. 2b, the OpenBCI board collects the EEG signals, converts the potential signals into digital signals through analog-to-digital conversion circuits and transmits them to a personal computer. The EEG cap consists of 8 dry electrodes with ultra-high-impedance amplifiers and 2 ear clips as reference electrodes. The EEG acquisition module has a sampling rate of 256 Hz and an operating voltage of 6 V, as shown in Fig. 2c.
There are many forms of EEG signal acquisition, and the locations of the collection electrodes also differ. To better analyze EEG signals from different regions, the international 10–20 system of electrode placement was formulated to standardize EEG collection, as shown in Fig. 2d. According to previous research [26, 32,33,34], the EEG signals of the prefrontal region (Fp1, Fp2), central region (C3, C4), temporal lobe (T7, T8) and occipital lobe (O1, O2) are related to the drowsy state, so electrodes are placed at these positions in this paper.
3.2 EEG signal pre-processing
As a weak physiological signal, the EEG signal is affected by various factors during acquisition, resulting in low quality. One of the challenges of using EEG-based systems is contamination from artifacts, including muscle noise, eye activity, blink artifacts, and instrumental noise such as line noise and electronic interference. It is therefore necessary to pre-process the collected EEG signals to remove these artifacts and improve signal quality. In this paper, we use linear filters, FastICA and wavelet threshold denoising to remove the artifacts. The general pre-processing pipeline is shown in Fig. 3.
The sampling frequency of the acquisition device is 256 Hz, so the maximum frequency of the collected EEG signal is 128 Hz, which is much higher than the frequency range of drowsiness-related EEG, and linear filters are therefore applied first. Because the Butterworth filter has the flattest frequency response in the passband and effectively retains the useful components of the signal, a third-order Butterworth bandpass filter with cut-off frequencies of 1 Hz and 60 Hz is selected to remove unwanted low-frequency and high-frequency components. Next, a Butterworth notch (trap) filter centered at 50 Hz is designed to remove power-line interference.
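As an illustration of this filtering stage, the sketch below applies a third-order Butterworth band-pass (1–60 Hz) and a 50 Hz notch to multi-channel data, assuming SciPy; the use of `iirnotch` and zero-phase `filtfilt` are implementation assumptions rather than the authors' exact MATLAB code.

```python
# Minimal sketch of the linear filtering stage, assuming SciPy.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256  # sampling rate in Hz

def preprocess_linear(eeg):
    """eeg: array of shape (n_samples, n_channels)."""
    # Third-order Butterworth band-pass, 1-60 Hz (normalized to Nyquist)
    b_bp, a_bp = butter(3, [1 / (FS / 2), 60 / (FS / 2)], btype="bandpass")
    filtered = filtfilt(b_bp, a_bp, eeg, axis=0)
    # 50 Hz notch to suppress power-line interference
    b_n, a_n = iirnotch(50 / (FS / 2), Q=30)
    return filtfilt(b_n, a_n, filtered, axis=0)

if __name__ == "__main__":
    demo = np.random.randn(FS * 5, 8)  # 5 s of synthetic 8-channel data
    print(preprocess_linear(demo).shape)
```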
Second, FastICA is applied to the eight-channel EEG signals. Independent component analysis (ICA) is a classical algorithm for blind source separation; it assumes that the original sources are statistically independent and that the observed signals are instantaneous mixtures of these sources. The observed signals are separated using prior knowledge to obtain the independent source components. ICA has been widely used for EEG artifact removal [11, 43]. The specific steps of the FastICA algorithm used in this paper are as follows (a minimal code sketch is given after the algorithm):
Step 1 Centralize and whiten the observed data to obtain \(Z\);

Step 2 Initialize the separation matrix \(W\), the convergence error \(\varepsilon\) and the maximum number of iterations \(p\);

Step 3 Update

$$ W_{n + 1} = E\left\{ {Zg^{\prime}\left( {W_{n}^{T} Z} \right)} \right\} - E\left\{ {g^{\prime\prime}\left( {W_{n}^{T} Z} \right)} \right\}W_{n} $$ (1)

Step 4 Standardize

$$ W_{n + 1} = W_{n + 1} /\left\| {W_{n + 1} } \right\| $$ (2)

Step 5 Judge whether \(\left\| {W_{n + 1} - W_{n} } \right\| < \varepsilon\) holds; if it holds, or the number of iterations reaches \(p\), stop. Otherwise, return to Step 3.

Here \(E\{ \cdot \}\) denotes the mean operation and \(g\) is the non-quadratic function, generally chosen as

$$ g(u) = \frac{1}{{a_{1} }}\log \cosh \left( {a_{1} u} \right) $$

where \(1 \le a_{1} \le 2\), and usually \(a_{1} = 1\).
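For reference, the separation step can be sketched with scikit-learn's FastICA, which implements the same fixed-point iteration as Eqs. (1)–(2); the tolerance and iteration limit below play the roles of \(\varepsilon\) and \(p\). This is a minimal sketch, not the authors' MATLAB implementation.

```python
# Blind source separation of the 8-channel EEG, assuming scikit-learn.
from sklearn.decomposition import FastICA

def separate_sources(eeg, tol=1e-4, max_iter=200):
    """eeg: (n_samples, n_channels) filtered EEG; returns components and the fitted model."""
    ica = FastICA(n_components=eeg.shape[1], tol=tol, max_iter=max_iter, random_state=0)
    sources = ica.fit_transform(eeg)   # estimated independent components
    return sources, ica

# After inspecting the components, artifact components can be zeroed out and
# the cleaned channels recovered with ica.inverse_transform(sources).
```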
The signals separated by FastICA are only relatively independent components, which still contain some noise and are not accurate enough. Therefore, after signal separation, we use the wavelet threshold method to decompose and reconstruct the signal of each channel. The wavelet threshold method is based on the discrete wavelet transform (DWT). First, wavelet decomposition is performed on the original signal to obtain the scale coefficients; three-layer wavelet decomposition is used in this paper. Then, thresholding is applied. We use soft thresholding [35]: when the absolute value of a wavelet coefficient is less than the given threshold, the coefficient is discarded; when it is greater than the threshold, the coefficient is set to the difference between the original value and the threshold (keeping its sign). Finally, the inverse wavelet transform reconstructs the signal to reduce the noise. The structure of the three-layer wavelet decomposition is shown in Fig. 4. We directly remove the high-frequency component D1 and reconstruct from the coefficients D2, D3 and A3 to obtain a reliable EEG signal.
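A minimal sketch of this denoising step is given below, assuming PyWavelets; the 'db4' mother wavelet and the universal threshold estimate are assumptions, since the text specifies only three decomposition levels, soft thresholding, and discarding D1.

```python
# Three-level wavelet threshold denoising of one EEG channel, assuming PyWavelets.
import numpy as np
import pywt

def wavelet_denoise(channel, wavelet="db4", level=3):
    """channel: 1-D EEG signal from one electrode."""
    coeffs = pywt.wavedec(channel, wavelet, level=level)   # [A3, D3, D2, D1]
    # Universal threshold estimated from the finest detail level D1
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(channel)))
    coeffs[1] = pywt.threshold(coeffs[1], thr, mode="soft")  # soft-threshold D3
    coeffs[2] = pywt.threshold(coeffs[2], thr, mode="soft")  # soft-threshold D2
    coeffs[3] = np.zeros_like(coeffs[3])                     # discard D1 entirely
    return pywt.waverec(coeffs, wavelet)[: len(channel)]     # reconstruct from A3, D3, D2
```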
3.3 Classification of EEG signals based on CNN
Previously, the mainstream solutions for classifying EEG signals were machine learning methods. In recent years, with the development of deep learning, convolutional neural networks have also been applied to EEG signal classification because of their excellent performance in computer vision and natural language processing. Hajinoroozi et al. [36] used convolutional neural networks for drowsiness detection from EEG signals and achieved good results. In this paper, neural networks with an Inception module and with a modified AlexNet module are proposed to classify the EEG signals.
3.3.1 Convolutional neural networks with Inception module
Among convolutional architectures, Inception and the residual network (ResNet) perform best and are the most popular. ResNet is designed to solve the problems of gradient explosion and vanishing gradients in very deep networks. However, the dataset trained in this paper is small, and capturing richer features matters more than increasing network depth, so a network structure with Inception units is selected. The Inception unit was first proposed by Szegedy et al. [37] in 2015. The Inception structure increases the width of the neural network and replaces large convolution kernels with several parallel small convolution kernels. While improving running speed, it concatenates the different outputs and lets the next layer adaptively select the required information through its weights.
In addition to the Inception unit, we also use batch normalization (BN) layers in the network. Ioffe et al. [38] proposed the BN layer as a training optimization method. Its essence is to adjust the distribution of activations: when the distributions of training data and test data differ, the generalization ability of the network suffers. The BN layer normalizes each training mini-batch and then restores it to an approximately original distribution.

The specific operation is as follows. Suppose that the input of a mini-batch in one layer of the network is \(X = [x_{1} ,x_{2} , \ldots ,x_{n} ]\), and two learnable parameters \(\gamma\) and \(\beta\) are set. First, compute the mean and variance of the elements in this mini-batch:

$$ \mu = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } ,\quad \sigma^{2} = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left( {x_{i} - \mu } \right)^{2} } $$

Then normalize each sample element:

$$ \hat{x}_{i} = \frac{{x_{i} - \mu }}{{\sqrt {\sigma^{2} + \epsilon } }} $$

where \(\epsilon\) is a small constant for numerical stability. Finally, scaling and shifting are performed to approximate the original distribution, and the output is:

$$ y_{i} = \gamma \hat{x}_{i} + \beta $$

In the BN layer, the performance of the network can be optimized by learning the two parameters \(\gamma\) and \(\beta\). A network with BN layers converges faster, effectively prevents the gradient dispersion problem and is more robust. Therefore, BN layers are used in the networks constructed in this paper to improve the capability of the model.
Inspired by the Inception network structure, we propose our own network structure, which is shown in Fig. 5. The model consists of five convolutional layers, two pooling layers, three Inception modules and three fully connected layers. All paddings use the 'same' mode.
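To make the structure concrete, the following tf.keras sketch builds an Inception-style block for one-second EEG windows (256 samples × 8 channels). The filter counts, kernel sizes and layer arrangement are illustrative assumptions and do not reproduce the exact configuration of Fig. 5.

```python
# Inception-style 1-D CNN sketch for EEG windows, assuming tf.keras (TensorFlow 2).
from tensorflow.keras import layers, models

def inception_block(x, filters=32):
    """Parallel 1x1 / 3x1 / 5x1 convolutions plus pooling, concatenated."""
    b1 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv1D(filters, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv1D(filters, 5, padding="same", activation="relu")(b5)
    bp = layers.MaxPooling1D(3, strides=1, padding="same")(x)
    bp = layers.Conv1D(filters, 1, padding="same", activation="relu")(bp)
    return layers.Concatenate()([b1, b3, b5, bp])

def build_model(n_samples=256, n_channels=8):
    inp = layers.Input(shape=(n_samples, n_channels))
    x = layers.Conv1D(64, 7, padding="same")(inp)
    x = layers.BatchNormalization()(x)          # BN before the activation
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling1D(2)(x)
    for _ in range(3):                          # three Inception modules
        x = inception_block(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.2)(x)                  # illustrative dropout rate
    out = layers.Dense(2, activation="softmax")(x)  # awake vs. drowsy
    return models.Model(inp, out)
```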
3.3.2 Modified AlexNet model
Because the EEG signals are relatively weak, and the acquisition and pre-processing are difficult, the dataset size is relatively small. In order to judge whether the adopted Inception model is reasonable, a modified AlexNet model is also used for comparative analysis.
The AlexNet model was proposed by Krizhevsky et al. [39]. It has 8 layers: 5 convolutional layers and 3 fully connected layers. Generally, the activation function of neurons is the tanh or sigmoid function; to speed up training, AlexNet uses the rectified linear unit (ReLU) after each convolutional layer.
Because the output range of the ReLU activation function is unbounded, AlexNet normalizes the ReLU responses with local response normalization (LRN):

$$ b_{x,y}^{i} = \frac{{a_{x,y}^{i} }}{{\left( {k + \alpha \sum\limits_{j = \max (0,\,i - n/2)}^{\min (N - 1,\,i + n/2)} {\left( {a_{x,y}^{j} } \right)^{2} } } \right)^{\beta } }} $$

where \(a_{x,y}^{i}\) is the activity of a neuron computed by applying kernel \(i\) at position \((x, y)\), \(b_{x,y}^{i}\) is the normalized response, \(N\) is the total number of kernels in the layer, \(n\) is the number of 'adjacent' kernel maps at the same spatial position, and \(k = 2,\;\alpha = 10^{ - 4} ,\;\beta = 0.75\).
The AlexNet model also uses overlapping pooling, and experiments show that pooling with overlap performs better than traditional non-overlapping pooling.
In this paper, a similar structure is used. Compared with the original AlexNet network, we reduce the size of the convolution kernels to extract finer features, and the number of output channels of the convolutional layers is also reduced. In addition, the LRN layers are removed and replaced by BN layers, placed after the convolutional layer and before the activation layer, and the group-convolution operation is removed so that training can be performed on a single GPU. On the other hand, a convolutional layer is added to increase the depth of the network and its representation ability. Finally, because BN layers are added, a smaller dropout setting is used at the fully connected layers. The network built in this paper includes 8 convolutional layers, 4 pooling layers and 3 fully connected layers. The padding of the first three convolutional layers uses the 'valid' mode, and the padding of the subsequent layers uses the 'same' mode. The modified AlexNet structure is shown in Fig. 6.
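The convolution–BN–ReLU ordering described above can be sketched as follows with tf.keras; the filter counts, kernel sizes and number of pooling layers are illustrative and do not reproduce Fig. 6 exactly.

```python
# Modified AlexNet-style 1-D CNN sketch, assuming tf.keras (TensorFlow 2).
from tensorflow.keras import layers, models

def conv_bn_relu(x, filters, kernel, padding):
    x = layers.Conv1D(filters, kernel, padding=padding)(x)
    x = layers.BatchNormalization()(x)      # BN replaces AlexNet's LRN layer
    return layers.Activation("relu")(x)

def build_modified_alexnet(n_samples=256, n_channels=8):
    inp = layers.Input(shape=(n_samples, n_channels))
    x = conv_bn_relu(inp, 32, 7, "valid")   # first three conv layers use 'valid'
    x = conv_bn_relu(x, 32, 5, "valid")
    x = layers.MaxPooling1D(2)(x)
    x = conv_bn_relu(x, 64, 5, "valid")
    for filters in (64, 96, 96, 128, 128):  # remaining conv layers use 'same'
        x = conv_bn_relu(x, filters, 3, "same")
    x = layers.MaxPooling1D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)              # dropout rate as reported in Sect. 4.6
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(2, activation="softmax")(x)
    return models.Model(inp, out)
```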
3.4 Early warning strategy module
Reasonable strategies can effectively remind vehicle drivers to restore their attention. In this paper, we propose a simple and feasible early warning strategy based on the EEG classification result, which prompts the driver as soon as drowsiness is detected so that attention can be restored quickly. The early warning strategy process is shown below; a minimal state-machine sketch of this logic follows the list.
(1) When the vehicle driver is driving normally and the EEG detection system does not detect an abnormal state, the system indicator light is white.

(2) When the vehicle driver is judged to be drowsy for 3 s, the driver is deemed to be in a drowsy state. The indicator light turns red to remind the driver to restore attention. At this time, the driver is in the first-level drowsiness state.

(3) When the red light has been on for more than 5 s and the driver's EEG signal is still judged to be drowsy, the driver is considered to be at a higher level of drowsiness, that is, the second-level drowsiness state. The buzzer then sounds to alert the driver.

(4) If the red light has been on for less than 5 s and the EEG signal returns to the normal state, the driver has recovered attention and the indicator light returns to white.
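Assuming the classifier outputs one awake/drowsy decision per second, the strategy above can be written as a small state machine; the function and state names below are illustrative.

```python
# Early warning logic as a simple state machine (one decision per second).
def warning_states(decisions, drowsy_trigger=3, alarm_trigger=5):
    """decisions: iterable of 0 (awake) / 1 (drowsy), one value per second."""
    drowsy_run = 0   # consecutive seconds judged drowsy
    red_run = 0      # seconds the red light has already stayed on
    for d in decisions:
        drowsy_run = drowsy_run + 1 if d else 0
        if drowsy_run == 0:
            red_run = 0
            yield "white"            # normal driving, white light
        elif drowsy_run < drowsy_trigger:
            yield "white"            # not yet first-level drowsiness
        else:
            red_run += 1
            if red_run > alarm_trigger:
                yield "alarm"        # second-level drowsiness: buzzer sounds
            else:
                yield "red"          # first-level drowsiness: red light

if __name__ == "__main__":
    demo = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
    print(list(warning_states(demo)))
```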
4 Experimental results
The goal of this section is to experimentally demonstrate the capability of the proposed vehicle driver drowsiness detection method using wearable EEG based on a convolutional neural network.
4.1 EEG signals acquisition program
Twenty subjects (18 males and 2 females) were selected for the collection experiment. They ranged in age from 22 to 42 years, were in good health and had no history of mental illness. Before the experiment, all subjects were informed of the experimental purposes and specific operating procedures and signed a written consent form.
To avoid the influence of blood glucose level on the EEG, data collection was carried out at least one hour after meals. All sedatives and sleeping drugs were stopped three days before the experiment, and to reduce scalp resistance, the subjects washed their hair the day before the experiment. The sampling frequency of the acquisition device is 256 Hz. After completing the above preparation, awake and drowsy EEG data were collected in two time periods. For the awake data, the subjects had slept effectively for 8 h before the experiment, ate breakfast at 8 a.m. and remained emotionally stable, and awake EEG data were collected from 10 a.m. to noon. For the drowsy data, the subjects stayed up late, and drowsy EEG data were collected between 3 a.m. and 5 a.m. after they had stayed awake for one day. The interval between the collection of awake data and drowsy data for each subject was one week.
The data acquisition experiment is shown in Fig. 7. We collected the awake EEG signals while the subject was driving a vehicle on an empty road on the school campus between 10 a.m. and noon, as shown in Fig. 7a, b. Considering the danger of collecting drowsy driving data in a real driving environment, the drowsy driving data were collected in a stationary laboratory setting between 3 a.m. and 5 a.m., as shown in Fig. 7c, d.
The fatigue warning system MR688 [40] was used to assist in verifying the true state of the subjects while the EEG signals were being collected.
4.2 Experimental setup
The data used in this paper are collected by OpenBCI and imported into the data stream in real time through MATLAB. The pre-processing steps, including the linear filter, FastICA and wavelet thresholding, are all completed in MATLAB, and the dataset is saved in 'mat' format.
The hardware environment of the experiment is as follows: an Intel Core i5-6300HQ CPU at 2.6 GHz, an NVIDIA GeForce 960M GPU with 4 GB of video memory, and 16 GB of DDR4-2133 running memory.
The software environment includes Python 3.6, with Anaconda used for package management. The GPU version of TensorFlow 1.7 is used, together with the NVIDIA CUDA computing platform (version 7.0) and the matching cuDNN 7.0 to accelerate the calculations.
4.3 Dataset description
The data samples used in this paper include the drowsy state and the awake state. The acquisition equipment is the eight-channel EEG device with a sampling frequency of 256 Hz. One second of EEG signal is used as a training sample, and one hour of EEG is collected under both the awake and the drowsy condition for each of the 20 subjects. In the end, we obtain a total of 69,054 samples, of which 33,035 are awake samples and 36,019 are drowsy samples. Figure 8 shows the EEG of the O1 channel in the awake and drowsy states. It can be seen that the EEG signal in the drowsy state is sparser and presents a certain waveform, which is caused by the increase of \(\alpha\)-wave activity during drowsiness. The collected signals are first filtered by the linear filter, then separated by FastICA, and finally denoised with the wavelet threshold. The samples are then stacked to obtain an EEG dataset of dimensions 69,054 × 256 × 8. According to the state of each sample, a 69,054 × 2 label dataset is established. The labels use one-hot coding: '10' and '01' represent awake and drowsy, respectively. Finally, the dataset is randomly shuffled and divided into two parts, 50,000 × 256 × 8 and 19,054 × 256 × 8, which serve as the training set and test set, respectively.
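The assembly of the dataset can be sketched as follows with NumPy; the zero arrays stand in for the real pre-processed windows, which in practice are loaded from the 'mat' files.

```python
# Dataset assembly sketch: one-hot labels, shuffling, and train/test split.
import numpy as np

N_TOTAL, N_TRAIN, WIN, CH = 69054, 50000, 256, 8

X = np.zeros((N_TOTAL, WIN, CH), dtype=np.float32)   # placeholder for the 1 s windows
y_state = np.zeros(N_TOTAL, dtype=np.int64)          # placeholder labels: 0 = awake, 1 = drowsy

# One-hot labels: '10' = awake, '01' = drowsy
y = np.eye(2, dtype=np.float32)[y_state]

# Shuffle the windows and split into training and test sets
rng = np.random.default_rng(0)
idx = rng.permutation(N_TOTAL)
X, y = X[idx], y[idx]
X_train, y_train = X[:N_TRAIN], y[:N_TRAIN]
X_test, y_test = X[N_TRAIN:], y[N_TRAIN:]
print(X_train.shape, X_test.shape)   # (50000, 256, 8) (19054, 256, 8)
```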
4.4 EEG signal pre-processing results
The EEG data obtained by the preliminary collection are not completely valid; they still contain considerable noise, which affects the subsequent experiments. The collected EEG data therefore need to be filtered to remove the noise components that do not coincide with the EEG spectrum. We select a third-order Butterworth bandpass filter to initially filter the signal, and the channel components of the EEG signals after linear filtering are shown in Fig. 9a. It can be seen that the linearly filtered EEG retains EEG characteristics, shows a nonlinear trend and has analytical value. Since the channels of the EEG signal affect each other, FastICA is used for separation to purify the EEG signal of each channel. After separation, the waveforms of the 8 EEG channels are shown in Fig. 9b. As can be seen, FastICA processing significantly improves the quality of the EEG signals. The signals processed by FastICA still contain some noise and are not accurate enough, so we use the wavelet threshold method to decompose and reconstruct the signal of each channel and extract the frequency band we need. Three-layer wavelet decomposition is used to reconstruct the EEG signal, and the results are shown in Fig. 9c. The output y in the figure is the reliable reconstructed EEG signal, with a band of 0–64 Hz.
4.5 EEG signals classification results based on CNN with Inception module
Considering the limited hardware resources, the training process employs mini-batch processing, which requires less memory and computes faster. A very small batch size would make the loss curve oscillate violently, so the batch size is set to 64 and the learning rate to 0.0003; the relatively small learning rate makes it easier to find the optimum for this classification problem. Dropout is added to the fully connected layers to reduce the number of effective parameters and prevent overfitting. Finally, because the Inception model is more complex, an L2 regularization term is added to the loss function to further prevent overfitting, and 15% of the training data is randomly extracted as a validation set to monitor the training status.
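A sketch of this training configuration with tf.keras is shown below; the optimizer (Adam) and the number of epochs are assumptions not stated in the text, and `build_model`, `X_train` and `y_train` refer to the earlier sketches in Sects. 3.3.1 and 4.3.

```python
# Training configuration sketch: batch size 64, learning rate 3e-4,
# 15% of the training data held out for validation.
import tensorflow as tf

model = build_model()                       # Inception-style sketch from Sect. 3.3.1
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),  # optimizer is an assumption
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# In practice the L2 penalty is attached to the convolution/dense layers,
# e.g. Conv1D(..., kernel_regularizer=tf.keras.regularizers.l2(1e-4)).
history = model.fit(
    X_train, y_train,                       # training arrays from Sect. 4.3
    batch_size=64,
    epochs=30,                              # illustrative epoch count
    validation_split=0.15,
)
```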
Figure 10a shows the change of the training loss of the network with the Inception module. From the loss curve, one can roughly judge whether the learning rate is reasonable. As seen from the figure, the loss initially drops rapidly to a low value thanks to the batch normalization layers. In subsequent iterations the training loss keeps fluctuating, because each mini-batch covers only a small part of the training set, but the overall trend is downward. Figure 10b shows the change of the validation loss during training. As the iterations increase, the validation loss also decreases continuously and eventually stabilizes. The curves show that no overfitting occurred, because the model parameters were selected properly.
The performance of the model is evaluated next, mainly in terms of accuracy, precision, recall, F1 score and area under the curve (AUC). Assuming the driver's original state is awake and the network also judges it as awake, such samples are true positives (TP). An awake state judged as drowsy is a false negative (FN). A drowsy state judged as awake is a false positive (FP). A drowsy state judged as drowsy is a true negative (TN). The formulas of accuracy, precision, recall and F1 score are as follows:

$$ {\text{Accuracy}} = \frac{{{\text{TP}} + {\text{TN}}}}{{{\text{TP}} + {\text{TN}} + {\text{FP}} + {\text{FN}}}} $$

$$ {\text{Precision}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FP}}}} $$

$$ {\text{Recall}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FN}}}} $$

$$ F1 = \frac{{2 \times {\text{Precision}} \times {\text{Recall}}}}{{{\text{Precision}} + {\text{Recall}}}} $$
The results of model evaluation are shown in Fig. 11. It can be seen that as the iteration progresses, these indicators continue to rise, which also shows that the capabilities of the model continue to rise. Finally, when the network is close to convergence, the indices tend to be stable.
The robustness of a classifier is mainly measured through the receiver operating characteristic (ROC) curve. The abscissa and ordinate of the ROC are the false positive rate (FPR) and the true positive rate (TPR), respectively, calculated as:

$$ {\text{FPR}} = \frac{{{\text{FP}}}}{{{\text{FP}} + {\text{TN}}}},\quad {\text{TPR}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FN}}}} $$
The ROC curve reflects the classification performance of a classifier. However, when curves cross, the ROC representation alone does not allow a quick judgment of classifier quality, so the area under the curve (AUC) is used to summarize the classification ability expressed by the ROC curve. Figure 12 shows the variation of the AUC with the number of iterations during training. The AUC curve rises steadily and finally stabilizes at about 0.95 after approximately 20,000 iterations.
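For reference, these indicators can be computed with scikit-learn as sketched below; the label and score arrays are placeholders, with the awake class treated as positive to match the definitions above.

```python
# Evaluation metrics sketch, assuming scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 1])                      # placeholder ground truth
y_score = np.array([0.9, 0.8, 0.3, 0.4, 0.7, 0.6, 0.2, 0.45])    # placeholder P(awake)
y_pred = (y_score >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))
```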
After training, the accuracy of the model on the last training mini-batch is 96.87%. On the validation set, the accuracy is 95.33%, the precision 95.57%, the recall 95.60%, the F1 score 95.48% and the AUC 0.9553. Feeding the 19,054 × 256 × 8 test set into the trained model yields a final accuracy of 95.59% and a recall of 96.12%.
After training, the feature maps of the network are visualized to observe the extracted features intuitively. We select the input of the first Inception module and the outputs of the three Inception modules for visualization. For convenience, the outputs of the first 16 filters of each layer are shown, and the results are given in Fig. 13. From the feature maps, it can be seen that in the shallow layers the data sequences are relatively long and contain more information and features. As the network deepens, the sequences extracted by the convolution kernels become shorter and each filter extracts fewer but more representative features.
We also visualize the relationship between driver drowsiness and electrode position using the network with the Inception module. The result is shown in Fig. 14. The horizontal axis represents the sequence index, and the vertical axis represents the channels, from top to bottom Fp1, Fp2, C3, C4, T7, T8, O1 and O2. Figure 14a shows the beginning of training, where the weights are randomly generated and look chaotic. As training progresses, the weight distribution gradually changes. After 20,000 iterations, Fig. 14b shows that the white dots are concentrated in the bottom two rows, which means the EEG signals of the O1 and O2 channels are most closely related to the drowsy state.
4.6 EEG signals classification results based on modified AlexNet model
The parameters of the modified AlexNet model are as follows: the batch size is 64 and the learning rate is set to 0.0003. Because this network is simpler than the Inception structure, in order to avoid insufficient representation capacity the dropout ratio is set to 0.5, that is, neurons are retained with a probability of 0.5 in the fully connected layers, which is greater than the 0.2 of the Inception structure, and the regularization term is removed from the cost function. The changes of the training loss and validation loss with the number of iterations are shown in Fig. 15. The network converged after approximately 16,000 iterations; during this period, the training loss kept decreasing and eventually stabilized, and the overall validation loss also kept decreasing.
We evaluate the model on the validation set. The accuracy, precision, recall and F1 score on the validation set are shown in Fig. 16. As the iterations progress, all indicators maintain an upward trend, which shows that the model keeps improving without overfitting. The AUC trend of the model is shown in Fig. 17. After training, the accuracy of the model on the last training mini-batch is 98.43%. On the validation set, the accuracy is 94.76%, the precision 94.87%, the recall 95.28%, the F1 score 94.96% and the AUC 0.9504. The accuracy and recall of the final model on the test set are 94.68% and 95.32%, respectively.
Next, test data samples are selected for visualization. In the modified AlexNet network, the second, fourth, sixth and eighth convolutional layers are visualized. Because the shallow sequences are long, for convenience the first 8 filters are selected for the second layer and 16 filters for the remaining layers. The visualization results are shown in Fig. 18. The modified AlexNet behaves similarly to the Inception network: the features extracted in the shallow layers resemble the original waveform, and as the depth increases the extracted features become more abstract, representing the features actually used by the network for recognition.
4.7 Comparison
Comparing the two models leads to the following conclusions. As the number of iterations increases, both models converge; the accuracy on the test set of the network with the Inception module is 95.59%, while that of the modified AlexNet is 94.68%. The classification accuracy of the Inception network is slightly higher, and its validation accuracy, precision, recall, F1 score and AUC are also slightly higher, but the difference is small and the two models have similar capabilities. In terms of training time, the Inception network takes 1 h and 16 s, while the modified AlexNet takes only 39 min; the modified AlexNet has fewer parameters, so it trains and converges faster.
On the other hand, we compared the proposed method with other state-of-the-art methods. Lin et al. [26] proposed a one-channel BCI system using the Mahalanobis distance (MD) to detect drowsiness in real time. Zhang et al. [41] used a support vector machine (SVM) classifier and the fast Fourier transform (FFT) to determine the vigilance level. Li et al. [42] proposed a smartwatch-based wearable EEG system using a support vector machine-based posterior probabilistic model (SVMPPM) for driver drowsiness detection. Punsawad et al. [32] developed a single-channel EEG-based device for real-time drowsiness detection. Chai et al. [43] presented a two-class EEG-based classification using a Bayesian neural network for classifying driver fatigue. Wali et al. [44] used the discrete wavelet packet transform (DWPT) and the fast Fourier transform (FFT) to classify the driver drowsiness level. The comparison results are shown in Table 1.
It can be seen from Table 1 that the method proposed by Lin et al. [26] obtained an accuracy of 82.8% and a recall of 88.7%. The accuracy of the method proposed by Li et al. [42] is 88.6% when the time window is 1 min. The method proposed by Wali et al. [44] obtained an accuracy of 79.21% and a recall of 82.09%. Zhang et al. [41] showed the effect of the time window on accuracy: when the time window is set to 1 s, the average classification accuracy of the O1 EEG signal is 83.28%, and as the time window lengthens, the classification accuracy generally rises, finally reaching 90.7% with a recall of 86.8%.
This is because traditional EEG classification methods rely more on features extracted from experience and subjective observation, so the longer the chosen time window, the higher the classification accuracy; choosing different features also leads to different final accuracies. The convolutional neural networks with the Inception module and the modified AlexNet module proposed in this paper both use EEG signals within a 1 s time window as training samples and obtain final accuracies of 95.59% and 94.68% and recalls of 96.12% and 95.32%, respectively. Punsawad et al. [32] used a 4-channel EEG-based method and obtained a classification accuracy of 90.4%. Chai et al. [43] used a 32-channel EEG-based system and obtained a classification accuracy of 88.2% and a recall of 89.7%. The comparison shows that multi-channel EEG classification outperforms single-channel classification, since multi-channel equipment can collect more useful information. However, as datasets grow and the number of classification categories increases, traditional classification algorithms face problems such as excessive computation time and insufficient accuracy. For deep learning methods, with improving computing power, producing more accurate and larger datasets allows the generalization ability and performance of the model to keep improving. After large-scale network training, the EEG signals of different drivers can easily be classified and warnings issued.
4.8 Early warning strategy module results
To verify the feasibility of the early warning strategy described in this paper, MATLAB is used for simulation. We assume that the driver's EEG classification sequence is as shown in Fig. 19a, where '1' and '0' represent the drowsy and awake states, respectively. Figure 19b shows the driver drowsiness level obtained from this assumed sequence through the proposed early warning strategy: '0' represents awake, '1' the first-level drowsiness state and '2' the second-level drowsiness state.
For safety reasons, the early warning system cannot be tested in a real driving environment, so we use the OpenBCI Cyton EEG acquisition system and the Arduino open-source electronic platform to verify the above simulation. The early warning experiment is shown in Fig. 20. When the driver is in the awake state, the first-level drowsiness state and the second-level drowsiness state, the corresponding responses of the warning equipment are the white light, the red light and the buzzer, respectively. The experimental results are consistent with the simulation, verifying the reliability of the early warning strategy.
5 Conclusions
In this study, a vehicle driver drowsiness detection method using wearable EEG based on a convolutional neural network is presented. The EEG collection module, the EEG signal processing module and the early warning module form a complete system that can be used for vehicle driving safety. The experimental results show the good performance of the proposed method: the classification accuracy reaches 95.59% on one-second time-window samples using the network with the Inception module and 94.68% using the modified AlexNet module. The proposed early warning strategy is also effective. The simulation and test results demonstrate the feasibility of the proposed EEG-based drowsiness detection system for vehicle driving safety.
In our future research, we will focus on integrating all modules and embedding them into the development board. We will also conduct more in-depth research on EEG artifact removal, signal classification and real-time signal processing.
References
Statistical Communiqué of the People's Republic of China on the 2019 National Economic and Social Development, http://www.stats.gov.cn/tjsj/zxfb/202002/t20200228_1728913.html, accessed November 2020
The Prevalence and Impact of Drowsy Driving, https://aaafoundation.org/prevalence-impact-drowsy-driving/, accessed November 2020
Precise numbers of drowsy-driving crashes, injuries, and fatalities are hard to nail down. https://www.nhtsa.gov/risky-driving/drowsy-driving, accessed November 2020
Eyetracker Warns against Momentary Driver Drowsiness. http://www.fraunhofer.de/en/press/research-news/2010/10/eye-tracker-driver-drowsiness.html, accessed November 2020
Kaplan S, Guvensan MA, Yavuz AG, Karalurt Y (2015) Driver behavior analysis for safe driving: a survey. IEEE Trans Intell Transp Syst 16(6):3017–3032
Ullah MR, Aslam M, Ullah MI, Maria MEA (2018) Driver’s drowsiness detection through computer vision: a review. In: Mexican International Conference on Artificial Intelligence. Springer: Cham. Doi: https://doi.org/10.1007/978-3-030-02840-4_22
Bila C, Sivrikaya F, Khan MA, Albayrak S (2017) Vehicles of the future: a survey of research on safety issues. IEEE Trans Intell Transp Syst 18(5):1046–1065
Dua M, Singla R, Raj S, Jangra A (2020) Deep CNN models-based ensemble approach to driver drowsiness detection. Neural Comput Appl 33:3155–3168
Cyganek B, Gruszczynski S (2014) Hybrid computer vision system for drivers’ eye recognition and fatigue monitoring. Neurocomputing 126:78–94
Gharagozlou F, Saraji GN, Mazloumi A et al (2015) Detecting driver mental fatigue based on EEG alpha power changes during simulated driving. Iran J Public Health 44(12):1693–1700
Lin CT, Wu RC, Liang SF et al (2005) EEG-based drowsiness estimation for safety driving using independent component analysis. IEEE Trans Circuits Syst I Regul Pap 52(12):2726–2738
Liu CC, Hosking SG, Lenne MG (2009) Predicting driver drowsiness using vehicle measures: recent insights and future challenges. J Saf Res 40(4):239–245
Desai AV, Haque MA (2006) Vigilance monitoring for operator safety: a simulation study on highway driving. J Saf Res 37(2):139–147
Mortazavi A, Eskandarian A, Sayed RA (2009) Effect of drowsiness on driving performance variables of commercial vehicle drivers. Int J Automot Technol 10(3):391–404
Forsman PM, Vila BJ, Short RA et al (2013) Efficient driver drowsiness detection at moderate levels of drowsiness. Accid Anal Prev 50:341–350
Mandal B, Li L, Wang GS et al (2017) Towards detection of bus driver fatigue based on robust visual analysis of eye state. IEEE Trans Intell Transp Syst 8(3):545–557
Saradadevi M, Bajaj P (2008) Driver fatigue detection using mouth and yawning analysis. Int J Comput Sci Netw Secur 6:183–188
Cyganek B, Gruszczynski S (2013) Eye recognition in near-infrared images for driver's drowsiness monitoring. In: IEEE Intelligent Vehicles Symposium (IV), pp 397–402
Tawari A, Trivedi M (2014) Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos. In: IEEE Intelligent Vehicles Symposium (IV), pp 344–349
Mbouna RO, Kong SG, Chun MG (2013) Visual analysis of eye state and head pose for driver alertness monitoring. IEEE Trans Intell Transp Syst 14(3):1462–1469
Begum S (2013) Intelligent driver monitoring systems based on physiological sensor signals: a review. In: International IEEE Conference on Intelligent Transportation Systems (ITSC), pp 282–289, doi: https://doi.org/10.1109/ITSC.2013.6728246
Khushaba RN, Kodagoda S, Lal S, Dissanayake G (2011) Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm. IEEE Trans Biomed Eng 58(1):121–131
LaRocco J, Le MD, Paeng DG (2020) A systemic review of available low-cost EEG headsets used for drowsiness detection. Front Neuroinform. https://doi.org/10.3389/fninf.2020.00001
Cao Z, Chuang CH, King JK, Lin CT (2019) Multi-channel EEG recordings during a sustained-attention driving task. Sci Data 6(1):1–8
Ma Y, Zhang S, Qi D et al (2020) Driving drowsiness detection with EEG using a modified hierarchical extreme learning machine algorithm with particle swarm optimization: a pilot study. Electronics 9(5):775
Lin CT, Chang CJ, Lin BS et al (2010) A real-time wireless brain–computer interface system for drowsiness detection. IEEE Trans Biomed Circuits Syst 4(4):214–222
Chai R, Ling SH, San PP et al (2017) Improving EEG-based driver fatigue classification using sparse-deep belief networks. Front Neurosci 11:103
Yeo MVM, Li X, Shen K et al (2009) Can SVM be used for automatic EEG detection of drowsiness during car driving? Saf Sci 47(1):115–124
Gu X et al (2021) EEG-based brain-computer interfaces (BCIs): a survey of recent studies on signal sensing technologies and computational intelligence approaches and their applications. IEEE/ACM Trans Comput Biol Bioinf. https://doi.org/10.1109/TCBB.2021.3052811
Gao Z et al (2019) EEG-based spatio-temporal convolutional neural network for driver fatigue evaluation. IEEE Trans Neural Netw Learn Syst 30(9):2755–2763. https://doi.org/10.1109/TNNLS.2018.2886414
Zeng H, Yang C, Dai G et al (2018) EEG classification of driver mental states by deep learning. Cogn Neurodyn 12(6):597–606
Punsawad Y, Aempedchr S, Wongsawat Y, Panichkun M (2011) Weighted-frequency index for EEG-based mental fatigue alarm system. Int J Appl 4(1):37
Ogino M, Mitsukura Y (2018) Portable drowsiness detection through use of a prefrontal single-channel electroencephalogram. Sensors 18(12):4477
Park HJ, Oh JS, Jeong DU, Park KS (2000) Automated sleep stage scoring using hybrid rule-and case-based reasoning. Comput Biomed Res 33(5):330–349
Donoho DL, Johnstone IM (1994) Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3):425–455
Hajinoroozi M, Mao Z, Jung TP et al (2016) EEG-based prediction of driver’s cognitive performance by deep convolutional neural network. Signal Process Image Commun 47:549–555
Szegedy C, Liu W, Jia Y et al (2015) Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 1–9
Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint
Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems (NIPS), pp 1097–1105
Driver Fatigue Monitor MR688, https://caredrive.dyq.cn/, accessed November 2021
Zhang XL, Li JL, Liu YG et al (2017) Design of a fatigue detection system for high-speed trains based on driver vigilance using a wireless wearable EEG. Sensors 17(3):486
Li G, Lee BL, Chung WY (2015) Smartwatch-based wearable EEG system for driver drowsiness detection. IEEE Sens J 15(12):7169–7180
Chai R, Naik GR, Nguyen TN et al (2016) Driver fatigue classification with independent component by entropy rate bound minimization analysis in an EEG-based system. IEEE J Biomed Health Inform 21(3):715–724
Wali MK, Murugappan M, Ahmmad B (2013) Wavelet packet transform based driver distraction level classification using EEG. Math Probl Eng. https://doi.org/10.1155/2013/297587
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant Nos. 51975490 and 51774241, and by the Science and Technology Projects of Sichuan under Grant Nos. 2020YFSY0070, 2021JDRC0118 and 2021JDRC0096.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.