
EEG Signal Processing Techniques and Applications—2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (30 August 2024) | Viewed by 24178

Special Issue Editors


Guest Editor
School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, UK
Interests: computing, simulation and modelling; human factors; industrial automation; instrumentation, sensors and measurement science; systems engineering; through-life engineering services
Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 2JH, UK
Interests: nonlinear signal processing; system identification; statistical machine learning; frequency-domain analysis; causality analysis; computational neuroscience
School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
Interests: brain dynamics and brain activities; brain–computer interfaces; AI for clinical disease diagnosis; neurorehabilitation; hybrid-augmented intelligence

Special Issue Information

Dear Colleagues,

Electroencephalography (EEG) is a well-established non-invasive tool to record brain electrophysiological activity. It is economical, portable, easy to administer, and widely available in most hospitals. Compared with other neuroimaging techniques that provide information about the anatomical structure (e.g., MRI, CT, and fMRI), EEG offers ultra-high time resolution, which is critical in understanding brain function. Empirical interpretation of EEG is largely based on recognizing abnormal frequencies in specific biological states, the spatial–temporal and morphological characteristics of paroxysmal or persistent discharges, reactivity to external stimuli and activation procedures, or intermittent photic stimulation. Despite being useful in many instances, these practical approaches to interpreting EEGs can leave important dynamic and nonlinear interactions between various brain network anatomical constituents undetected within the recordings, as such interactions are far beyond the observational capabilities of any specially trained physician in this field. 
This Special Issue will provide a forum for original high-quality research in EEG signal pre-processing, modeling, analysis, and applications in the time, space, frequency, or time–frequency domains. The applications of artificial intelligence and machine learning approaches to this topic are particularly welcome. The covered applications include but are not limited to:

  • Clinical studies.
  • Human factors.
  • Brain–machine interfaces.
  • Psychology and neuroscience.
  • Social interactions.

Dr. Yifan Zhao
Dr. Fei He
Dr. Yuzhu Guo
Dr. Hua-Liang Wei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • electroencephalography
  • EEG signal processing
  • artificial intelligence in EEG data analysis
  • brain connectivity
  • time–frequency analysis
  • deep learning in EEG data analysis
  • machine learning techniques in EEG data analysis
  • computer-aided diagnosis systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (16 papers)


Research

17 pages, 596 KiB  
Article
User Evaluation of a Shared Robot Control System Combining BCI and Eye Tracking in a Portable Augmented Reality User Interface
by Arnau Dillen, Mohsen Omidi, Fakhreddine Ghaffari, Olivier Romain, Bram Vanderborght, Bart Roelands, Ann Nowé and Kevin De Pauw
Sensors 2024, 24(16), 5253; https://doi.org/10.3390/s24165253 - 14 Aug 2024
Viewed by 680
Abstract
This study evaluates an innovative control approach to assistive robotics by integrating brain–computer interface (BCI) technology and eye tracking into a shared control system for a mobile augmented reality user interface. Aimed at enhancing the autonomy of individuals with physical disabilities, particularly those with impaired motor function due to conditions such as stroke, the system utilizes BCI to interpret user intentions from electroencephalography signals and eye tracking to identify the object of focus, thus refining control commands. This integration seeks to create a more intuitive and responsive assistive robot control strategy. The real-world usability was evaluated, demonstrating significant potential to improve autonomy for individuals with severe motor impairments. The control system was compared with an eye-tracking-based alternative to identify areas needing improvement. Although BCI achieved an acceptable success rate of 0.83 in the final phase, eye tracking was more effective with a perfect success rate and consistently lower completion times (p<0.001). The user experience responses favored eye tracking in 11 out of 26 questions, with no significant differences in the remaining questions, and subjective fatigue was higher with BCI use (p=0.04). While BCI performance lagged behind eye tracking, the user evaluation supports the validity of our control strategy, showing that it could be deployed in real-world conditions and suggesting a pathway for further advancements. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
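The control strategy evaluated in this paper (select an object by gaze, decode an action via motor imagery, then accept or reject the decoded class before the robot acts) can be sketched as a small state machine. The stage and event names below are hypothetical illustrations, not identifiers from the authors' implementation.

```python
from enum import Enum, auto

class Stage(Enum):
    SELECT_OBJECT = auto()   # gaze dwell picks the target object
    SELECT_ACTION = auto()   # motor-imagery (MI) BCI decodes an action class
    CONFIRM = auto()         # user accepts or rejects the decoded class

def shared_control_step(stage, event):
    """One transition of the gaze + MI shared-control loop (event names are assumptions)."""
    if stage is Stage.SELECT_OBJECT and event == "gaze_fixed":
        return Stage.SELECT_ACTION, None
    if stage is Stage.SELECT_ACTION and event.startswith("mi:"):
        return Stage.CONFIRM, event[3:]          # carry the decoded MI class forward
    if stage is Stage.CONFIRM and event == "accept":
        return Stage.SELECT_OBJECT, "execute"    # robot executes the confirmed action
    if stage is Stage.CONFIRM and event == "reject":
        return Stage.SELECT_ACTION, None         # return to action selection
    return stage, None                           # ignore events that do not apply
```

A rejection loops back to action selection rather than object selection, matching the accept/reject cycle the paper's control strategy describes.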
Figure captions:

  • Figure 1. The control strategy of the MI BCI control system. The user selects an object with their gaze and uses MI to select one of the possible actions. After accepting or rejecting the decoded MI class, the robot executes the associated action or returns to the action selection stage.
  • Figure 2. The experimental procedure followed for Phase 1 in (a) session 1 and (b) session 2.
  • Figure 3. The experimental procedure followed for Phase 3 in (a) session 1, (b) session 2, and (c) session 3 with options A and B.
  • Figure 4. (a) Success rate for individual participants together with the mean for each phase and (b) boxplots comparing the mean completion times between the eye tracking and BCI control system variants for each task.
  • Figure 5. UEQ questions where a significant difference was found between the participants' answers for the eye tracking and BCI variants. A score of 1 indicates that users felt the top term of the label was more applicable, while 7 means that the bottom term was more applicable.
  • Figure 6. mVAS scores at the beginning and end of each of the first two sessions, and for session 3 before, between the first and second evaluation rounds, and at the end of the session, split by the control system used.
19 pages, 6459 KiB  
Article
Detection of Pilots’ Psychological Workload during Turning Phases Using EEG Characteristics
by Li Ji, Leiye Yi, Haiwei Li, Wenjie Han and Ningning Zhang
Sensors 2024, 24(16), 5176; https://doi.org/10.3390/s24165176 - 10 Aug 2024
Viewed by 926
Abstract
Pilot behavior is crucial for aviation safety. This study aims to investigate the EEG characteristics of pilots, refine training assessment methodologies, and bolster flight safety measures. The collected EEG signals underwent initial preprocessing. The EEG characteristic analysis was performed during left and right turns, involving the calculation of the energy ratio of beta waves and Shannon entropy. The psychological workload of pilots during different flight phases was quantified as well. Based on the EEG characteristics, the pilots’ psychological workload was classified through the use of a support vector machine (SVM). The study results showed significant changes in the energy ratio of beta waves and Shannon entropy during left and right turns compared to the cruising phase. Additionally, the pilots’ psychological workload was found to have increased during these turning phases. Using support vector machines to detect the pilots’ psychological workload, the classification accuracy for the training set was 98.92%, while for the test set, it was 93.67%. This research holds significant importance in understanding pilots’ psychological workload. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
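The two EEG features at the core of this study, the beta-band energy ratio and Shannon entropy, can be computed from a single epoch roughly as follows. This is a minimal sketch using a plain periodogram and conventional band limits; it is not the authors' exact pipeline.

```python
import numpy as np

def band_powers(x, fs, bands):
    """Signal power per frequency band from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

def beta_ratio_and_entropy(x, fs):
    """Beta energy ratio and Shannon entropy over theta/alpha/beta bands."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    p = band_powers(x, fs, bands)
    total = sum(p.values())
    beta_ratio = p["beta"] / total                     # energy ratio of beta waves
    probs = np.array([v / total for v in p.values()])  # band powers as a distribution
    shannon = -np.sum(probs * np.log2(probs + 1e-12))  # Shannon entropy over bands
    return beta_ratio, shannon
```

In the paper, per-epoch features of this kind are then fed to a support vector machine to classify the pilots' psychological workload.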
Figure captions:

  • Figure 1. Percentage of fatal accidents.
  • Figure 2. The proportion of factors causing accidents.
  • Figure 3. Professional flight simulator.
  • Figure 4. Emotiv EPOC+ EEG cap.
  • Figure 5. Flight simulation experiment.
  • Figure 6. Artifact rejection (VEOG).
  • Figure 7. EEG map before, during, and after turns.
  • Figure 8. Spherical correlation graph for left turns.
  • Figure 9. Spherical correlation graph for right turns.
  • Figure 10. Energy ratios across different task phases.
  • Figure 11. EEG energy characteristics: (a) beta wave energy ratio; (b) β/(θ + α) wave energy.
  • Figure 12. EEG Shannon entropy during different task phases.
  • Figure 13. EEG sample entropy during different task phases.
  • Figure 14. NASA-TLX weight test table.
  • Figure 15. Pearson correlation coefficients between Shannon entropy and energy ratio at various flight stages.
  • Figure 16. Pearson correlation coefficients between sample entropy and energy ratio at various flight stages.
  • Figure 17. EEG characteristics and psychological workload during different flight maneuvers.
  • Figure 18. Classification results of psychological workload.
22 pages, 6002 KiB  
Article
Latent Prototype-Based Clustering: A Novel Exploratory Electroencephalography Analysis Approach
by Sun Zhou, Pengyi Zhang and Huazhen Chen
Sensors 2024, 24(15), 4920; https://doi.org/10.3390/s24154920 - 29 Jul 2024
Viewed by 570
Abstract
Electroencephalography (EEG)-based applications in brain–computer interfaces (BCIs), neurological disease diagnosis, rehabilitation, etc., rely on supervised approaches such as classification, which require given labels. However, with the ever-increasing amount of EEG data, incomplete, incorrectly labeled, or unlabeled EEG data are becoming more common, which is likely to degrade the performance of supervised approaches. In this work, we put forward a novel unsupervised exploratory EEG analysis solution that clusters data based on low-dimensional prototypes in latent space, each associated with its cluster. With the prototype as the baseline of each cluster, a compositive similarity incorporating similarities on three levels is defined to act as the critic function in clustering. The approach is implemented with a Generative Adversarial Network (GAN), termed W-SLOGAN, by extending the Stein Latent Optimization for GANs (SLOGAN). The Gaussian Mixture Model (GMM) is utilized as the latent distribution to adapt to the diversity of EEG signal patterns, and the W-SLOGAN ensures that images generated from each Gaussian component belong to the associated cluster. The adaptively learned Gaussian mixing coefficients keep the model effective on imbalanced datasets. By applying the proposed approach to two public EEG or intracranial EEG (iEEG) epilepsy datasets, our experiments demonstrate that the clustering results are close to the classification of the data. Moreover, we present several findings discovered by intra-class clustering and by cross-analysis of clustering and classification. They show that the approach is attractive in practice for diagnosing epileptic subtypes, multiple labelling of EEG data, etc. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
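As a toy illustration of clustering by latent prototypes: once an encoder maps a signal to a latent vector, assigning it to a cluster reduces to computing posterior responsibilities under the Gaussian mixture. This sketch assumes isotropic components and omits the paper's three-level compositive similarity; it shows only the latent-level part of the idea.

```python
import numpy as np

def gaussian_logpdf(e, mu, var):
    """Log-density of an isotropic Gaussian in latent space."""
    d = len(mu)
    return -0.5 * (d * np.log(2 * np.pi * var) + np.sum((e - mu) ** 2) / var)

def assign_cluster(e_query, mus, pis, var=1.0):
    """Pick the cluster whose latent prototype best explains the encoded sample.

    mus: list of latent prototypes (Gaussian means), one per cluster.
    pis: mixing coefficients, which the paper learns adaptively.
    """
    logs = np.array([np.log(pi) + gaussian_logpdf(e_query, mu, var)
                     for mu, pi in zip(mus, pis)])
    logs -= logs.max()                       # stabilize before exponentiating
    resp = np.exp(logs) / np.exp(logs).sum() # posterior responsibilities
    return int(np.argmax(resp)), resp
```

The adaptively learned mixing coefficients `pis` are what let such a model stay effective on imbalanced datasets: a rare cluster keeps a small but nonzero prior weight instead of being absorbed by a dominant one.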
Figure captions:

  • Figure 1. Different periods of electroencephalography (EEG) signals of an epileptic patient; a–e denote different time points.
  • Figure 2. Schematic of the EEG clustering solution based on latent prototypes. CWT: continuous wavelet transform; DFM: deep feature map; e_query: latent-space representation of the query signal; μ_k: latent prototype of the kth cluster; x_query: scalogram of the query signal; x_k: baseline scalogram of the kth cluster; DFM_query: deep feature map of the query signal; DFM_k: baseline deep feature map of the kth cluster; α1, α2, and α3: weights.
  • Figure 3. Latent distribution defined as a Gaussian mixture, together with the distributions of generated and real data. Assuming three clusters in the dataset, μ1, μ2, and μ3 can be regarded as the latent prototypes of the three clusters.
  • Figure 4. Network architecture of W-SLOGAN. The latent distribution is a Gaussian mixture; assume three components. z1, z2, and z3: latent vectors sampled from latent space; e1, e2, and e3: encoded vectors of the scalograms computed by the encoder; μ1, μ2, and μ3: mean vectors of the three Gaussian components, corresponding to the latent prototypes of the three clusters; d_x: output of the discriminator.
  • Figure 5. Three levels of similarity for clustering, assuming three Gaussian components. DFM: deep feature map; μ1, μ2, and μ3: latent prototypes of the three clusters; e_query: latent representation of the query signal; x1, x2, and x3: baseline scalograms of the three clusters; x_query: scalogram of the query signal; DFM_1, DFM_2, and DFM_3: baseline deep feature maps of the three clusters; DFM_query: deep feature map of the query signal.
  • Figure 6. Clustering results and intra-class diversity. (A1–A3) Probability density functions for samples belonging to Cluster 1, Cluster 2, and Cluster 3, respectively. (B) Probability density function of Class AB samples clustered into Cluster 1, with several high-probability samples and their scalograms (upper row) and several low-probability samples and their scalograms (lower row). (C) The same for Class CD samples clustered into Cluster 3.
  • Figure 7. Purity, ARI, and NMI of clustering on four groups of EEG/intracranial EEG (iEEG) data of the Bonn dataset, separately using different kinds of similarities.
  • Figure 8. Purity, ARI, and NMI of clustering on ECoG data of three epileptic subjects of the HUP dataset, separately using different kinds of similarities.
  • Figure 9. Impact of the number of W-SLOGAN training iterations on clustering performance for four groups of EEG data of the Bonn dataset, evaluated with Purity, ARI, and NMI.
  • Figure 10. Impact of the number of W-SLOGAN training iterations on clustering performance for ECoG data of three epileptic subjects of the HUP dataset, evaluated with Purity, ARI, and NMI.
  • Figure 11. Typical kinds of epileptiform waveforms found by clustering the ictal iEEG data of the Bonn dataset. Each row shows the characteristic waveform of one type of epileptiform discharge, three epileptiform waves of that type clustered into the same cluster by our approach, and the baseline scalogram of that cluster.
  • Figure 12. Class labels and clustering results of several samples in group AB_CD_E of the Bonn dataset. Samples in each row belong to the same class and those in each column are clustered into the same cluster; each grid displays four samples. Row 1 and column 1: Class AB (healthy); row 2 and column 2: Class CD (inter-ictal, epileptic); row 3 and column 3: Class E (ictal, epileptic).
19 pages, 2917 KiB  
Article
An Innovative EEG-Based Pain Identification and Quantification: A Pilot Study
by Colince Meli Segning, Rubens A. da Silva and Suzy Ngomo
Sensors 2024, 24(12), 3873; https://doi.org/10.3390/s24123873 - 14 Jun 2024
Viewed by 874
Abstract
Objective: The present pilot study aimed to propose an innovative scale-independent measure based on electroencephalographic (EEG) signals for the identification and quantification of the magnitude of chronic pain. Methods: EEG data were collected from three groups of participants at rest: seven healthy participants with pain, 15 healthy participants submitted to thermal pain, and 66 participants living with chronic pain. Every 30 s, the pain intensity score felt by the participant was also recorded. Electrodes positioned in the contralateral motor region were of interest. After EEG preprocessing, a complex analytical signal was obtained using Hilbert transform, and the upper envelope of the EEG signal was extracted. The average coefficient of variation of the upper envelope of the signal was then calculated for the beta (13–30 Hz) band and proposed as a new EEG-based indicator, namely Piqβ, to identify and quantify pain. Main results: The main results are as follows: (1) A Piqβ threshold at 10%, that is, Piqβ ≥ 10%, indicates the presence of pain, and (2) the higher the Piqβ (%), the higher the extent of pain. Conclusions: This finding indicates that Piqβ can objectively identify and quantify pain in a population living with chronic pain. This new EEG-based indicator can be used for objective pain assessment based on the neurophysiological body response to pain. Significance: Objective pain assessment is a valuable decision-making aid and an important contribution to pain management and monitoring. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
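The indicator construction described in the abstract (Hilbert transform → analytic signal → upper envelope → coefficient of variation) can be sketched as below. This assumes the input segment has already been band-pass filtered to beta (13–30 Hz); the FFT-based Hilbert transform is a standard construction, not the authors' code, and the 10% threshold follows the abstract.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero negative frequencies, double positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def piq_beta(x_beta):
    """Coefficient of variation (%) of the upper envelope of a beta-band EEG segment."""
    envelope = np.abs(analytic_signal(x_beta))      # upper envelope of the signal
    return 100.0 * envelope.std() / envelope.mean()

def pain_detected(x_beta, threshold=10.0):
    # Piqβ >= 10% is the pain-presence criterion proposed in the study
    return piq_beta(x_beta) >= threshold
```

Intuitively, a steady beta envelope yields a small coefficient of variation, while strong envelope fluctuations push Piqβ above the threshold; the study's averaging over recording windows is omitted here.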
Figure captions:

  • Figure 1. The thermal stimulus kit.
  • Figure 2. Experimental design: thermal stimulus, pain score, and EEG recording during 300 s (5 min).
  • Figure 3. Methodological approach from the filtering of the EEG signal, through the estimation of the coefficient of variation of the upper envelope in beta (CVUE_β), to the calculation of pain identification and quantification (Piq_β).
  • Figure 4. Methodological steps showing the application of the Hilbert transform up to the extraction of the upper envelope: (a) original real-valued signal; (b) real and imaginary parts of the analytic signal; (c) superposition of the real and imaginary parts of the analytic signal and the upper envelope of the original signal.
  • Figure 5. Normalized mean [0–1] for all participants (n = 15) of the three variables: (1) normalized pain score intensity (black dotted curve), (2) normalized level of pain stimulus (grid curve), and (3) normalized pain identification and quantification in the beta frequency band (Piq_β) (black curve).
  • Figure 6. Scatter plot of the Piq_β indicator and pain score. All participants living with chronic pain show Piq_β ≥ 10%. The two solid points represent participants who reported a pain score lower than 1/10 but had a Piq_β indicator ≥ 10%. The hollow points represent participants whose pain scores are consistent with their Piq_β indicator values.
17 pages, 5820 KiB  
Article
Detection Method of Epileptic Seizures Using a Neural Network Model Based on Multimodal Dual-Stream Networks
by Baiyang Wang, Yidong Xu, Siyu Peng, Hongjun Wang and Fang Li
Sensors 2024, 24(11), 3360; https://doi.org/10.3390/s24113360 - 24 May 2024
Cited by 1 | Viewed by 818
Abstract
Epilepsy is a common neurological disorder, and its diagnosis mainly relies on the analysis of electroencephalogram (EEG) signals. However, raw EEG signals contain limited recognizable features. To increase the recognizable features in the network input, the differential features of the signals and the amplitude and phase spectra in the frequency domain are extracted to form a two-dimensional feature vector. To recognize these multimodal features, a neural network model based on a multimodal dual-stream network is proposed. It combines one-dimensional convolution, two-dimensional convolution, and LSTM neural networks to extract the spatial features of the EEG two-dimensional vectors and the temporal features of the signals, respectively, exploiting the advantages of both network types so that temporal and spatial features are extracted simultaneously. In addition, a channel attention module focuses the model on features related to seizures. Finally, multiple sets of experiments were conducted on the Bonn and New Delhi datasets, achieving the highest test-set accuracies of 99.69% and 97.5%, respectively, verifying the superiority of the proposed model for the task of epileptic seizure detection. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
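The feature construction described in the abstract (differential features plus amplitude and phase spectra, stacked into a two-dimensional input) can be sketched as follows. The function name, zero-padding strategy, and row layout are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def extract_features(eeg):
    """Build a 2-D feature map from a 1-D EEG segment: the raw signal, its
    first-order differential, and the amplitude and phase spectra.
    Layout and padding are illustrative."""
    diff = np.diff(eeg, prepend=eeg[0])        # differential feature, same length
    spectrum = np.fft.rfft(eeg)
    amplitude = np.abs(spectrum)               # amplitude spectrum
    phase = np.angle(spectrum)                 # phase spectrum
    # Zero-pad the spectra to the signal length so all rows align
    pad = len(eeg) - len(amplitude)
    amplitude = np.pad(amplitude, (0, pad))
    phase = np.pad(phase, (0, pad))
    return np.stack([eeg, diff, amplitude, phase])   # shape: (4, n_samples)

rng = np.random.default_rng(0)
segment = rng.standard_normal(512)
features = extract_features(segment)
print(features.shape)  # (4, 512)
```

A map like this can then be fed to the two-dimensional convolution stream, while the raw sequence feeds the one-dimensional convolution and LSTM stream.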
Show Figures
Figure 1. Bonn EEG dataset visualization.
Figure 2. New Delhi EEG dataset visualization.
Figure 3. EEG signal processing pipeline.
Figure 4. One-dimensional convolution principle.
Figure 5. Two-dimensional convolution principle.
Figure 6. LSTM structure.
Figure 7. Overall process of the epileptic seizure detection method.
Figure 8. Training accuracy and loss on the Bonn dataset for the proposed network and for the ablation variants with the LSTM module removed, the two-dimensional convolution module removed, and both removed.
Figure 9. Ablation performance on the Bonn dataset for the proposed network and the variants with the LSTM module, the two-dimensional convolution module, or both removed: (a) highest training accuracy; (b) loss function.
Figure 10. Accuracy, precision, recall, and F1-score of the Bonn test-set ablation experiment for the proposed network and the three ablation variants.
Figure 11. Confusion matrices of the Bonn test-set ablation experiment for the proposed network and the three ablation variants.
Figure 12. Cluster analysis of the Bonn test-set ablation experiment for the proposed network and the three ablation variants.
Figure 13. Performance of the proposed network on the New Delhi dataset: (a) accuracy; (b) loss function.
Figure 14. Confusion matrix and cluster analysis of the proposed network on the New Delhi dataset.
29 pages, 5473 KiB  
Article
Optimal Channel Selection of Multiclass Motor Imagery Classification Based on Fusion Convolutional Neural Network with Attention Blocks
by Joharah Khabti, Saad AlAhmadi and Adel Soudani
Sensors 2024, 24(10), 3168; https://doi.org/10.3390/s24103168 - 16 May 2024
Viewed by 820
Abstract
The widely adopted paradigm in brain–computer interfaces (BCIs) involves motor imagery (MI), enabling improved communication between humans and machines. EEG signals derived from MI present several challenges due to their inherent characteristics, which make classifying and identifying the potential tasks of a specific participant a complex process. Another issue is that BCI systems can produce noisy data and redundant channels, which in turn increase equipment and computational costs. To address these problems, optimal channel selection for multiclass MI classification based on a Fusion convolutional neural network with Attention blocks (FCNNA) is proposed. In this study, we developed a CNN model consisting of convolutional blocks with multiple spatial and temporal filters. These filters are designed to capture the distribution and relationships of signal features across different electrode locations and to analyze the evolution of these features over time. Following these layers, a Convolutional Block Attention Module (CBAM) is used to further enhance EEG signal feature extraction. In the channel selection process, a genetic algorithm selects the optimal set of channels using a new technique that delivers both fixed and variable channel sets for all participants. The proposed methodology is validated, showing a 6.41% improvement in multiclass classification compared to most baseline models. Notably, we achieved the highest result of 93.09% for the binary classes involving left-hand and right-hand movements. In addition, the cross-subject strategy for multiclass classification yielded an accuracy of 68.87%. Following channel selection, multiclass classification accuracy was enhanced, reaching 84.53%. Overall, our experiments illustrate the efficiency of the proposed EEG MI model in both channel selection and classification, showing superior results with either a full channel set or a reduced number of channels. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
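A genetic algorithm over channel subsets, of the kind the abstract describes, can be sketched with a bit-mask population. The fitness function below is a hypothetical stand-in for the classifier accuracy the paper actually optimizes, with a small penalty per channel to favor compact subsets:

```python
import numpy as np

rng = np.random.default_rng(42)
N_CHANNELS = 22                       # BCI IV 2a montage size
# Hypothetical per-channel "usefulness", standing in for classifier accuracy
channel_value = rng.random(N_CHANNELS)

def fitness(mask):
    """Reward informative channels, penalise subset size (illustrative)."""
    if mask.sum() == 0:
        return -1.0
    return channel_value[mask.astype(bool)].mean() - 0.01 * mask.sum()

def evolve(pop_size=30, generations=50, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, N_CHANNELS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_CHANNELS)                # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(N_CHANNELS) < p_mut            # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

best = evolve()
print(best.sum(), "channels selected")
```

In the paper the fitness evaluation would wrap a full train-and-test cycle of the FCNNA model for each candidate channel mask.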
Show Figures
Figure 1. ConvNet structure. C = number of channels; T = number of time points; KE = kernel width; F1 and F2 = number of filters.
Figure 2. Timing scheme of the BCI IV 2a dataset.
Figure 3. Overall framework.
Figure 4. Structure of the FCNNA model.
Figure 5. CBAM.
Figure 6. Genetic algorithm applied in our model.
Figure 7. Confusion matrix of the proposed model on the four classes of BCI IV 2a using the within-subject strategy.
Figure 8. Optimal channels selected based on the GA and cross-subject classification after testing each subject individually; the highlighted electrodes indicate the positions of the selected channels for each subject.
Figure 9. Average number of channels (electrodes) selected across all subjects after applying the genetic algorithm with cross-subject classification.
Figure 10. Comparison of our work with state-of-the-art research in terms of accuracy and number of channels used, as discussed in Hassanpour et al. (2019) [18], Tiwari et al. (2023) [19], Mahamune et al. (2023) [21], and Chen et al. (2020) [22].
Figure 11. ROC curves and AUC for each subject across different methods: within-subject with all channels ("Within-subject"), cross-subject, within-subject with fixed channel selection ("Fixed Channels"), and within-subject with variable channel selection ("Variable Channels"). The dotted black lines reflect the performance of a random predictor, serving as a reference for comparing the four methods.
Figure A1. Confusion matrix of the proposed model on the two classes of left and right hand in BCI IV 2a using the within-subject strategy.
Figure A2. Confusion matrix of the proposed model on the two classes of feet and tongue in BCI IV 2a using the within-subject strategy.
20 pages, 2552 KiB  
Article
Identifying the Effect of Cognitive Motivation with the Method Based on Temporal Association Rule Mining Concept
by Tustanah Phukhachee, Suthathip Maneewongvatana, Chayapol Chaiyanan, Keiji Iramina and Boonserm Kaewkamnerdpong
Sensors 2024, 24(9), 2857; https://doi.org/10.3390/s24092857 - 30 Apr 2024
Cited by 1 | Viewed by 840
Abstract
Being motivated has positive influences on task performance. However, motivation could result from various motives that affect different parts of the brain. Analyzing the motivation effect from all affected areas requires a high number of EEG electrodes, resulting in high cost, inflexibility, and burden to users. In various real-world applications, only the motivation effect is required for performance evaluation regardless of the motive. Analyzing the relationships between the motivation-affected brain areas associated with the task’s performance could limit the required electrodes. This study introduced a method to identify the cognitive motivation effect with a reduced number of EEG electrodes. The temporal association rule mining (TARM) concept was used to analyze the relationships between attention and memorization brain areas under the effect of motivation from the cognitive motivation task. For accuracy improvement, the artificial bee colony (ABC) algorithm was applied with the central limit theorem (CLT) concept to optimize the TARM parameters. From the results, our method can identify the motivation effect with only FCz and P3 electrodes, with 74.5% classification accuracy on average with individual tests. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
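The before, contain, and overlap relationships underlying the TARM concept can be illustrated with a small interval classifier. This is a minimal sketch; the paper's definitions operate on discretized ERSP trend sequences and are more involved:

```python
def relation(a, b):
    """Classify the temporal relationship between two intervals (start, end),
    mirroring the before / contain / overlap relations used in TARM."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:                       # a ends before b starts
        return "before"
    if s1 <= s2 and e2 <= e1:         # a fully contains b
        return "contain"
    if s1 < s2 < e1 < e2:             # a and b partially overlap
        return "overlap"
    return "other"

print(relation((0, 2), (3, 5)))    # before
print(relation((0, 10), (2, 5)))   # contain
print(relation((0, 4), (2, 6)))    # overlap
```

Rules mined over such relationships between attention- and memorization-related channel activity are what the ABC algorithm then tunes the parameters for.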
Show Figures
Figure 1. Processes of the cognitive motivation effect identification method proposed in this study: (A) the model construction part and (B) the identification part.
Figure 2. [Top] Preprocessing steps from EEG to discretized sequences, with [Bottom] example data output from each step: (A) EEG segment; (B) ERSP; (C) smoothed alpha-band (8-12 Hz) ERSP; (D) discretized trend sequences.
Figure 3. An example of before relationships.
Figure 4. An example of contain relationships.
Figure 5. An example of overlap relationships.
Figure 6. Processes to identify the cognitive motivation effect; the parameters and model to be optimized for each step are listed under the corresponding red label.
Figure 7. The optimization part of the cognitive motivation effect identification method, implemented with the ABC algorithm.
Figure 8. An example of temporal ERSP trend data of an RR epoch with a before relationship; the arrow marks the occurrence time of the corresponding head model.
21 pages, 840 KiB  
Article
Using Explainable Artificial Intelligence to Obtain Efficient Seizure-Detection Models Based on Electroencephalography Signals
by Jusciaane Chacon Vieira, Luiz Affonso Guedes, Mailson Ribeiro Santos and Ignacio Sanchez-Gendriz
Sensors 2023, 23(24), 9871; https://doi.org/10.3390/s23249871 - 16 Dec 2023
Cited by 2 | Viewed by 1519
Abstract
Epilepsy is a condition that affects 50 million individuals globally, significantly impacting their quality of life. Epileptic seizures, a transient occurrence, are characterized by a spectrum of manifestations, including alterations in motor function and consciousness. These events impose restrictions on the daily lives of those affected, frequently resulting in social isolation and psychological distress. In response, numerous efforts have been directed towards the detection and prevention of epileptic seizures through EEG signal analysis, employing machine learning and deep learning methodologies. This study presents a methodology that reduces the number of features and channels required by simpler classifiers, leveraging Explainable Artificial Intelligence (XAI) for the detection of epileptic seizures. The proposed approach achieves performance metrics exceeding 95% in accuracy, precision, recall, and F1-score by utilizing merely six features and five channels in a temporal domain analysis, with a time window of 1 s. The model demonstrates robust generalization across the patient cohort included in the database, suggesting that feature reduction in simpler models—without resorting to deep learning—is adequate for seizure detection. The research underscores the potential for substantial reductions in the number of attributes and channels, advocating for the training of models with strategically selected electrodes, and thereby supporting the development of effective mobile applications for epileptic seizure detection. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
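The feature-reduction step works by ranking features by mean absolute SHAP value across samples and keeping only the top k. A minimal sketch, with synthetic SHAP values standing in for real model explanations:

```python
import numpy as np

def top_k_features(shap_values, k=6):
    """Rank features by mean |SHAP| across samples and keep the top k,
    mirroring the paper's reduction to six features (illustrative)."""
    importance = np.abs(shap_values).mean(axis=0)
    return np.argsort(importance)[::-1][:k]

# Synthetic SHAP matrix: 100 samples x 20 features, with the extreme
# columns carrying the largest attributions by construction
shap_values = np.tile(np.linspace(-2.0, 2.0, 20), (100, 1))
keep = top_k_features(shap_values, k=6)
print(sorted(keep.tolist()))  # [0, 1, 2, 17, 18, 19]
```

In the paper this ranking is computed from SHAP explanations of a trained classifier, and the retained features and channels are used to train the smaller final model.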
Show Figures
Figure 1. SHAP summary plot [33].
Figure 2. Activity sequence of the first phase of the proposed approach.
Figure 3. The international 10-20 system of electrode placement.
Figure 4. SHAP values (phase 1).
Figure 5. Phase 1 performance.
Figure 6. SHAP values (phase 2).
Figure 7. Phase 2 performance.
Figure 8. Phase 1 and 2 accuracies (%).
20 pages, 2917 KiB  
Article
Illuminating the Neural Landscape of Pilot Mental States: A Convolutional Neural Network Approach with Shapley Additive Explanations Interpretability
by Ibrahim Alreshidi, Desmond Bisandu and Irene Moulitsas
Sensors 2023, 23(22), 9052; https://doi.org/10.3390/s23229052 - 8 Nov 2023
Cited by 2 | Viewed by 1211
Abstract
Predicting pilots’ mental states is a critical challenge in aviation safety and performance, with electroencephalogram data offering a promising avenue for detection. However, the interpretability of machine learning and deep learning models, which are often used for such tasks, remains a significant issue. This study aims to address these challenges by developing an interpretable model to detect four mental states—channelised attention, diverted attention, startle/surprise, and normal state—in pilots using EEG data. The methodology involves training a convolutional neural network on power spectral density features of EEG data from 17 pilots. The model’s interpretability is enhanced via the use of SHapley Additive exPlanations values, which identify the top 10 most influential features for each mental state. The results demonstrate high performance in all metrics, with an average accuracy of 96%, a precision of 96%, a recall of 94%, and an F1 score of 95%. An examination of the effects of mental states on EEG frequency bands further elucidates the neural mechanisms underlying these states. The innovative nature of this study lies in its combination of high-performance model development, improved interpretability, and in-depth analysis of the neural correlates of mental states. This approach not only addresses the critical need for effective and interpretable mental state detection in aviation but also contributes to our understanding of the neural underpinnings of these states. This study thus represents a significant advancement in the field of EEG-based mental state detection. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
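Power spectral density features of the kind fed to the CNN can be sketched with a simple periodogram-based band-power calculation. The band edges below are common conventions, not necessarily the paper's exact choices:

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs=256.0):
    """Per-band power from a periodogram: a minimal stand-in for the PSD
    features used as CNN input."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

t = np.arange(0, 2, 1 / 256.0)
alpha_wave = np.sin(2 * np.pi * 10 * t)       # 10 Hz test tone
powers = band_powers(alpha_wave)
print(max(powers, key=powers.get))  # alpha
```

Computing these per channel and per band yields the feature map whose most influential entries the SHAP analysis then identifies for each mental state.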
Show Figures
Figure 1. An overview of the proposed approach.
Figure 2. The average power in each frequency band across pilots.
Figure 3. Heatmap of the average power in each frequency band for the EEG channels.
Figure 4. Training accuracy and loss curves of the proposed model.
Figure 5. Confusion matrix of the proposed approach.
Figure 6. Top 10 important features for the NE class.
Figure 7. Top 10 important features for the SS class.
Figure 8. Top 10 important features for the CA class.
Figure 9. Top 10 important features for the DA class.
16 pages, 519 KiB  
Article
Domain-Specific Processing Stage for Estimating Single-Trail Evoked Potential Improves CNN Performance in Detecting Error Potential
by Andrea Farabbi and Luca Mainardi
Sensors 2023, 23(22), 9049; https://doi.org/10.3390/s23229049 - 8 Nov 2023
Cited by 1 | Viewed by 942
Abstract
We present a novel architecture designed to enhance the detection of Error Potential (ErrP) signals during ErrP stimulation tasks. In the context of predicting ErrP presence, conventional Convolutional Neural Networks (CNNs) typically accept a raw EEG signal as input, encompassing both the information associated with the evoked potential and the background activity, which can potentially diminish predictive accuracy. Our approach involves advanced Single-Trial (ST) ErrP enhancement techniques for processing raw EEG signals in the initial stage, followed by CNNs for discerning between ErrP and NonErrP segments in the second stage. We tested different combinations of methods and CNNs. As far as ST ErrP estimation is concerned, we examined various methods encompassing subspace regularization techniques, Continuous Wavelet Transform, and ARX models. For the classification stage, we evaluated the performance of EEGNet, CNN, and a Siamese Neural Network. A comparative analysis against the method of directly applying CNNs to raw EEG signals revealed the advantages of our architecture. Leveraging subspace regularization yielded the best improvement in classification metrics, at up to 14% in balanced accuracy and 13.4% in F1-score. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
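The idea of enhancing a single-trial evoked potential before classification can be illustrated with a toy shrinkage of each sweep toward the grand-average template. This is not the paper's subspace-regularization, CWT, or ARX estimator, only a sketch of combining trial-specific and template information:

```python
import numpy as np

def shrink_to_template(trial, template, lam=0.5):
    """Toy single-trial ERP enhancer: blend the raw sweep with the
    grand-average template. lam = 1 keeps the raw trial, lam = 0 keeps
    the template (both the function and lam are illustrative)."""
    return lam * trial + (1 - lam) * template

rng = np.random.default_rng(3)
t = np.linspace(0, 0.6, 150)
erp = np.exp(-((t - 0.3) ** 2) / 0.002)          # synthetic evoked component
trials = erp + rng.standard_normal((40, 150))    # noisy single sweeps
template = trials.mean(axis=0)
enhanced = shrink_to_template(trials[0], template)
# The enhanced sweep typically correlates better with the true ERP than the raw one
raw_r = np.corrcoef(trials[0], erp)[0, 1]
enh_r = np.corrcoef(enhanced, erp)[0, 1]
print(round(raw_r, 3), round(enh_r, 3))
```

The paper's point is precisely that feeding such an enhanced estimate, rather than the raw sweep, to EEGNet, a CNN, or a Siamese network improves ErrP/NonErrP discrimination.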
Show Figures
Figure 1. Pipeline of the employed methods. The novelty of our approach is the introduction of a Single-Trial Extraction block between the EEG preprocessing stage and the NN classification stage.
Figure 2. Performance metrics on the test set using EEGNet as the classifier, reported for each processing technique in both subject-wise and population-wise analyses; balanced accuracy and F1-score are included for completeness.
Figure 3. Utility gain using EEGNet as the classifier with different ST estimation techniques, reported for the subject-wise and population-wise analyses for p values over 70% only, as no major differences were observed outside this range.
Figure 4. Performance metrics on the test set using L-CNN as the classifier, reported for each processing technique in both subject-wise and population-wise analyses; balanced accuracy and F1-score are included for completeness.
Figure 5. Utility gain using L-CNN as the classifier with different ST estimation techniques, reported for the subject-wise and population-wise analyses for p values over 70%, as no major differences were observed below this level.
Figure 6. Performance metrics on the test set using a Siamese Neural Network as the classifier, reported for each processing technique in both subject-wise and population-wise analyses; balanced accuracy and F1-score are included for completeness.
Figure 7. Utility gain using a Siamese Neural Network as the classifier with different ST estimation techniques, reported for the subject-wise and population-wise analyses for p values over 70%, as no major differences were observed below this level.
21 pages, 2109 KiB  
Article
Graph Analysis of TMS–EEG Connectivity Reveals Hemispheric Differences following Occipital Stimulation
by Ilaria Siviero, Davide Bonfanti, Gloria Menegaz, Silvia Savazzi, Chiara Mazzi and Silvia Francesca Storti
Sensors 2023, 23(21), 8833; https://doi.org/10.3390/s23218833 - 30 Oct 2023
Cited by 2 | Viewed by 1953
Abstract
(1) Background: Transcranial magnetic stimulation combined with electroencephalography (TMS–EEG) provides a unique opportunity to investigate brain connectivity. However, possible hemispheric asymmetries in signal propagation dynamics following occipital TMS have not been investigated. (2) Methods: Eighteen healthy participants underwent occipital single-pulse TMS at two different EEG sites, corresponding to early visual areas. We used a state-of-the-art Bayesian estimation approach to accurately estimate TMS-evoked potentials (TEPs) from EEG data, which has not been previously used in this context. To capture the rapid dynamics of information flow patterns, we implemented a self-tuning optimized Kalman (STOK) filter in conjunction with the information partial directed coherence (iPDC) measure, enabling us to derive time-varying connectivity matrices. Subsequently, graph analysis was conducted to assess key network properties, providing insight into the overall network organization of the brain network. (3) Results: Our findings revealed distinct lateralized effects on effective brain connectivity and graph networks after TMS stimulation, with left stimulation facilitating enhanced communication between contralateral frontal regions and right stimulation promoting increased intra-hemispheric ipsilateral connectivity, as evidenced by statistical test (p < 0.001). (4) Conclusions: The identified hemispheric differences in terms of connectivity provide novel insights into brain networks involved in visual information processing, revealing the hemispheric specificity of neural responses to occipital stimulation. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
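A degree metric of the kind compared between hemispheres can be computed by thresholding a directed connectivity matrix (e.g., of iPDC values) and summing in- and out-connections per node. The threshold and matrix values below are illustrative:

```python
import numpy as np

def node_degree(conn, threshold=0.2):
    """Binarise a directed connectivity matrix at a threshold and return
    each node's total degree (in + out), a basic graph metric."""
    adj = (conn > threshold).astype(int)
    np.fill_diagonal(adj, 0)          # ignore self-connections
    return adj.sum(axis=0) + adj.sum(axis=1)

conn = np.array([[0.0, 0.5, 0.1],
                 [0.3, 0.0, 0.4],
                 [0.1, 0.6, 0.0]])
print(node_degree(conn))  # [2 4 2]
```

In the study such metrics are computed over the time-varying connectivity matrices produced by the STOK filter and compared statistically between left- and right-occipital stimulation conditions.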
Show Figures
Figure 1. (a) Electrode placement and schematic representation of the experimental procedure: (i) random interval (700-1000 ms); (ii) single-pulse TMS stimulation; (iii) phosphene awareness assessment (up to 2000 ms); (iv) inter-trial interval (1300 ms). (b) Signal processing pipeline for assessing the differences between hemispheres after TMS stimulation, divided into preprocessing, TMS-evoked potential estimation, time-varying effective connectivity calculation, and graph network analysis.
Figure 2. Global mean field power of the TEP using the conventional averaging (CA) method (blue) and the Bayesian smoothing approach (red), obtained from 360 sweeps. The upper part refers to left TMS stimulation and the lower part to right TMS stimulation. Gray areas indicate statistically significant differences between the CA method and Bayesian smoothing (Wilcoxon signed-rank test, Bonferroni corrected, p < 0.05). Time is in ms and amplitude in μV.
Figure 3. Time-varying brain connectivity analysis for the stimulation sites: (a) TMS on O1; (b) TMS on O2; (c) statistically significant differences between conditions indicated by arrows (Wilcoxon signed-rank test, uncorrected, p < 0.001). Red arrows indicate stronger connections following right stimulation (O2) than left stimulation (O1); blue arrows indicate stronger connections following left stimulation (O1) than right stimulation (O2).
Figure 4. Time-varying edge betweenness centrality for the stimulation sites: (a) left (O1) TMS stimulation; (b) right (O2) TMS stimulation; (c) statistically significant differences between conditions indicated by arrows (Wilcoxon signed-rank test, uncorrected, p < 0.001). Red arrows indicate stronger edge values following right stimulation (O2) than left stimulation (O1); blue arrows indicate the opposite.
Figure 5. (a) Electrode placement using the international 10-10 system, covering the ipsilateral frontal channels (blue), contralateral frontal channels (green), ipsilateral occipital channels (yellow), and contralateral occipital channels (orange). (b) Degree metric of the graph networks in response to left vs. right TMS; asterisks above the boxplots indicate statistically significant differences between conditions (Wilcoxon signed-rank test, uncorrected, p < 0.001).
18 pages, 2099 KiB  
Article
Depressive Disorder Recognition Based on Frontal EEG Signals and Deep Learning
by Yanting Xu, Hongyang Zhong, Shangyan Ying, Wei Liu, Guibin Chen, Xiaodong Luo and Gang Li
Sensors 2023, 23(20), 8639; https://doi.org/10.3390/s23208639 - 23 Oct 2023
Cited by 4 | Viewed by 2142
Abstract
Depressive disorder (DD) has become one of the most common mental diseases, seriously endangering both the affected person’s psychological and physical health. Nowadays, a DD diagnosis mainly relies on the experience of clinical psychiatrists and subjective scales, lacking objective, accurate, practical, and automatic diagnosis technologies. Recently, electroencephalogram (EEG) signals have been widely applied for DD diagnosis, but mainly with high-density EEG, which can severely limit the efficiency of the EEG data acquisition and reduce the practicability of diagnostic techniques. The current study attempts to achieve accurate and practical DD diagnoses based on combining frontal six-channel electroencephalogram (EEG) signals and deep learning models. To this end, 10 min clinical resting-state EEG signals were collected from 41 DD patients and 34 healthy controls (HCs). Two deep learning models, multi-resolution convolutional neural network (MRCNN) combined with long short-term memory (LSTM) (named MRCNN-LSTM) and MRCNN combined with residual squeeze and excitation (RSE) (named MRCNN-RSE), were proposed for DD recognition. The results of this study showed that the higher EEG frequency band obtained the better classification performance for DD diagnosis. The MRCNN-RSE model achieved the highest classification accuracy of 98.48 ± 0.22% with 8–30 Hz EEG signals. These findings indicated that the proposed analytical framework can provide an accurate and practical strategy for DD diagnosis, as well as essential theoretical and technical support for the treatment and efficacy evaluation of DD. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
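Isolating the 8-30 Hz band on which the MRCNN-RSE model performed best can be sketched with a crude FFT-domain band-pass. Clinical pipelines would use proper FIR/IIR filters; this only illustrates band isolation:

```python
import numpy as np

def fft_bandpass(signal, fs, lo=8.0, hi=30.0):
    """Crude zero-phase band-pass: zero all FFT bins outside [lo, hi] Hz
    and transform back (illustrative, not a clinical-grade filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 250.0
t = np.arange(0, 4, 1 / fs)
mixed = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 12 * t)  # 2 Hz + 12 Hz
filtered = fft_bandpass(mixed, fs)
# Only the 12 Hz component survives; the 2 Hz component is removed
```

Band-limited segments like this, from the six frontal channels, are what the MRCNN-LSTM and MRCNN-RSE models take as input.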
Show Figures
Figure 1. Clinical characteristics of DD and HC. * means p < 0.05; ns means non-significant.
Figure 2. Names and locations of the sixteen EEG channels; red dots indicate the six frontal channels selected for this study.
Figure 3. MRCNN-LSTM model architecture.
Figure 4. MRCNN architecture.
Figure 5. Feature calibration module with the RSE block in the MRCNN-RSE model.
Figure 6. Accuracy, precision, recall, and weighted F1-score using EEG to identify patients with depressive disorder.
Figure 7. Comparison of 4-30 Hz, 8-30 Hz, 10-30 Hz, and 13-30 Hz EEG signals for DD diagnosis based on the MRCNN-RSE model. * represents p < 0.05; ns represents no statistical difference between the two groups.
16 pages, 4776 KiB  
Article
Characterisation of Cognitive Load Using Machine Learning Classifiers of Electroencephalogram Data
by Qi Wang, Daniel Smythe, Jun Cao, Zhilin Hu, Karl J. Proctor, Andrew P. Owens and Yifan Zhao
Sensors 2023, 23(20), 8528; https://doi.org/10.3390/s23208528 - 17 Oct 2023
Cited by 4 | Viewed by 1978
Abstract
A high cognitive load can overload a person, potentially resulting in catastrophic accidents. It is therefore important to ensure the level of cognitive load associated with safety-critical tasks (such as driving a vehicle) remains manageable for drivers, enabling them to respond appropriately to changes in the driving environment. Although electroencephalography (EEG) has attracted significant interest in cognitive load research, few studies have used EEG to investigate cognitive load in the context of driving. This paper presents a feasibility study on the simulation of various levels of cognitive load through designing and implementing four driving tasks. We employ machine learning-based classification techniques using EEG recordings to differentiate driving conditions. An EEG dataset containing these four driving tasks from a group of 20 participants was collected to investigate whether EEG can be used as an indicator of changes in cognitive load. The collected dataset was used to train four Deep Neural Networks and four Support Vector Machine classification models. The results showed that the best model achieved a classification accuracy of 90.37%, utilising statistical features from multiple frequency bands in 24 EEG channels. Furthermore, the Gamma and Beta bands achieved higher classification accuracy than the Alpha and Theta bands during the analysis. The outcomes of this study have the potential to enhance the Human–Machine Interface of vehicles, contributing to improved safety. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
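The pipeline described in the abstract rests on band-limited spectral power features extracted per EEG channel and frequency band. The following is a minimal illustrative sketch of that feature-extraction step, not the paper's implementation: the single synthetic channel, the 256 Hz sampling rate, and the band edges are all assumptions, since the listing does not specify them.

```python
import numpy as np

FS = 256  # sampling rate in Hz (an assumption; not stated in the abstract)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch, fs=FS):
    """Mean spectral power of a single-channel EEG epoch in each canonical band."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS  # one 2-second epoch

# Synthetic stand-ins for the two conditions: "high load" adds a gamma-band tone.
low_load = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
high_load = low_load + 1.5 * np.sin(2 * np.pi * 38 * t)

lo_p, hi_p = band_powers(low_load), band_powers(high_load)
```

In the study such features from 24 channels feed DNN and SVM classifiers; in this toy case the gamma-band power alone already separates the two synthetic conditions, mirroring the reported finding that Gamma and Beta bands were the most discriminative.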
Figure 1
<p>A block diagram displaying the four steps of the overall methodology.</p>
Figure 2
<p>Images displaying the experimental setup and a map showing routes followed by participants. (<b>a</b>) Complete sets of sensors and their locations. (<b>b</b>) Driver simulator setup. (<b>c</b>) Display of the EEG sensor device. (<b>d</b>) Route map of the experiment; the red route shows the motorway route, and the blue route shows the urban route.</p>
Figure 3
<p>The cognitive load of driving tasks rated by participants.</p>
Figure 4
<p>The architecture of the DNN model.</p>
Figure 5
<p>Classification matrices using DNN for 10-fold cross-validation. (<b>a</b>) Classification matrices of High-Low two-class classification in urban environments. (<b>b</b>) Classification matrices of High-Low two-class classification in motorway environments. (<b>c</b>) Classification matrices of High-Low two-class classification combined. (<b>d</b>) Classification matrices of four-class classification.</p>
Figure 6
<p>Topographic maps of the accuracy values in each model.</p>
Figure 7
<p>Topographic maps displaying median means of the Theta, Alpha, Beta, and Gamma bands across each section of the experiment. (<b>a</b>) Theta band. (<b>b</b>) Alpha band. (<b>c</b>) Beta band. (<b>d</b>) Gamma band.</p>
Figure 8
<p>Topographic maps displaying the variance in Theta, Alpha, Beta, and Gamma bands across each experiment section. (<b>a</b>) Theta band. (<b>b</b>) Alpha band. (<b>c</b>) Beta band. (<b>d</b>) Gamma band.</p>
13 pages, 2045 KiB  
Article
Electrocortical Dynamics of Usual Walking and the Planning to Step over Obstacles in Parkinson’s Disease
by Rodrigo Vitório, Ellen Lirani-Silva, Diego Orcioli-Silva, Victor Spiandor Beretta, Anderson Souza Oliveira and Lilian Teresa Bucken Gobbi
Sensors 2023, 23(10), 4866; https://doi.org/10.3390/s23104866 - 18 May 2023
Cited by 2 | Viewed by 2396
Abstract
The neural correlates of locomotion impairments observed in people with Parkinson’s disease (PD) are not fully understood. We investigated whether people with PD present distinct brain electrocortical activity during usual walking and the approach phase of obstacle avoidance when compared to healthy individuals. Fifteen people with PD and fourteen older adults walked overground in two conditions: usual walking and obstacle crossing. Scalp electroencephalography (EEG) was recorded using a mobile 64-channel EEG system. Independent components were clustered using a k-means clustering algorithm. Outcome measures included absolute power in several frequency bands and alpha/beta ratio. During the usual walk, people with PD presented a greater alpha/beta ratio in the left sensorimotor cortex than healthy individuals. While approaching obstacles, both groups reduced alpha and beta power in the premotor and right sensorimotor cortices (balance demand) and increased gamma power in the primary visual cortex (visual demand). Only people with PD reduced alpha power and alpha/beta ratio in the left sensorimotor cortex when approaching obstacles. These findings suggest that PD affects the cortical control of usual walking, leading to a greater proportion of low-frequency (alpha) neuronal firing in the sensorimotor cortex. Moreover, the planning for obstacle avoidance changes the electrocortical dynamics associated with increased balance and visual demands. People with PD rely on increased sensorimotor integration to modulate locomotion. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
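The abstract mentions grouping independent components with a k-means clustering algorithm. A minimal sketch of that grouping step is below; the component locations are synthetic stand-ins in MNI-like coordinates, not data from the study, and the deterministic bounding-box initialization is a simplification of whatever initialization the authors used.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Minimal k-means; centroids start on a line across the bounding box (deterministic)."""
    centroids = np.linspace(points.min(axis=0), points.max(axis=0), k)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
# Synthetic IC locations (MNI-like x, y, z in mm) for two hypothetical cortical clusters.
left = rng.normal([-40.0, -25.0, 55.0], 3.0, size=(12, 3))
right = rng.normal([40.0, -25.0, 55.0], 3.0, size=(12, 3))
labels, centroids = kmeans(np.vstack([left, right]), k=2)
```

With well-separated left and right sensorimotor locations, the two recovered centroids land near the true cluster centers, which is the behavior the study relies on when assigning independent components to cortical regions.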
Figure 1
<p><b>Top</b>: Topographic plots for the four identified clusters. <b>Bottom</b>: Bar graphs show the alpha/beta ratio for people with PD (green) and healthy older adults (blue) in the usual walking and obstacle conditions. Circles represent individual values. a indicates a significant difference between the usual walking and obstacle conditions for people with PD; b indicates a significant difference between people with PD and healthy older adults.</p>
Figure 2
<p><b>Top:</b> Electrocortical clusters of independent components plotted on the MNI brain. Blue spheres represent independent components of healthy older adults and green spheres represent independent components of people with PD. Red spheres represent the centroid locations for the clusters ((<b>A</b>)—left sensorimotor cortex; (<b>B</b>)—right sensorimotor cortex). <b>Bottom:</b> Bar graphs show absolute power for the theta, alpha, beta, and gamma bands in the usual walking and obstacle conditions. Circles within the graphs represent individual values. a indicates a significant difference between the usual walking and obstacle conditions for people with PD; c indicates a main effect of condition.</p>
Figure 3
<p><b>Top:</b> Electrocortical clusters of independent components plotted on the MNI brain. Blue spheres represent independent components of healthy older adults and green spheres represent independent components of people with PD. Red spheres represent the centroid locations for the clusters ((<b>A</b>)—middle premotor and supplementary motor area; (<b>B</b>)—visual cortex). <b>Bottom:</b> Bar graphs show absolute power for the theta, alpha, beta, and gamma bands in the usual walking and obstacle conditions. Circles within the graphs represent individual values. c indicates a main effect of condition.</p>
29 pages, 2248 KiB  
Article
Method for Automatic Estimation of Instantaneous Frequency and Group Delay in Time–Frequency Distributions with Application in EEG Seizure Signals Analysis
by Vedran Jurdana, Miroslav Vrankic, Nikola Lopac and Guruprasad Madhale Jadav
Sensors 2023, 23(10), 4680; https://doi.org/10.3390/s23104680 - 11 May 2023
Cited by 3 | Viewed by 1914
Abstract
Instantaneous frequency (IF) is commonly used in the analysis of electroencephalogram (EEG) signals to detect oscillatory-type seizures. However, IF cannot be used to analyze seizures that appear as spikes. In this paper, we present a novel method for the automatic estimation of IF and group delay (GD) in order to detect seizures with both spike and oscillatory characteristics. Unlike previous methods that use IF alone, the proposed method utilizes information obtained from localized Rényi entropies (LREs) to generate a binary map that automatically identifies regions requiring a different estimation strategy. The method combines IF estimation algorithms for multicomponent signals with time and frequency support information to improve signal ridge estimation in the time–frequency distribution (TFD). Our experimental results indicate the superiority of the proposed combined IF and GD estimation approach over the IF estimation alone, without requiring any prior knowledge about the input signal. The LRE-based mean squared error and mean absolute error metrics showed improvements of up to 95.70% and 86.79%, respectively, for synthetic signals and up to 46.45% and 36.61% for real-life EEG seizure signals. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
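The Rényi-entropy-based component counting that underpins the method above can be illustrated on a plain spectrogram: for equal-energy, non-overlapping components, the order-3 Rényi entropy of the normalized time–frequency distribution grows by roughly one bit each time the number of components doubles. The sketch below demonstrates only that counting principle under those simplifying assumptions; it is not the paper's localized (LRE) algorithm, which operates on windowed slices of an adaptive TFD.

```python
import numpy as np

def spectrogram(x, fs, win=128, hop=32):
    """Magnitude-squared STFT: a basic, cross-term-free time-frequency distribution."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, x.size - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def renyi_entropy(tfd, alpha=3):
    """Order-alpha Renyi entropy (in bits) of a TFD normalized to unit sum."""
    p = tfd / tfd.sum()
    return np.log2((p ** alpha).sum()) / (1 - alpha)

fs = 256
t = np.arange(2 * fs) / fs
one_tone = np.sin(2 * np.pi * 20 * t)
two_tones = one_tone + np.sin(2 * np.pi * 80 * t)  # add a second, well-separated component

r1 = renyi_entropy(spectrogram(one_tone, fs))
r2 = renyi_entropy(spectrogram(two_tones, fs))
n_est = 2 ** (r2 - r1)  # entropy grows by ~1 bit per doubling of component count
```

Applying this count locally over time slices (or frequency slices) is what lets the proposed method decide where an IF-style estimate applies and where a GD-style estimate is needed instead.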
Figure 1
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) the local number of signal components, <math display="inline"><semantics> <mrow> <mi>N</mi> <msub> <mi>C</mi> <mi>t</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, (ideal—dashed red line; obtained—solid blue line) obtained from the STRE; and (<b>c</b>) the local number of signal components, <math display="inline"><semantics> <mrow> <mi>N</mi> <msub> <mi>C</mi> <mi>f</mi> </msub> <mrow> <mo>(</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, (ideal—dashed red line; obtained—solid blue line) obtained from the NBRE.</p>
Figure 2
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) estimated IFs, <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>t</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; and (<b>c</b>) estimated GDs, <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>f</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 3
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) the local number of signal components, <math display="inline"><semantics> <mrow> <mi>N</mi> <msub> <mi>C</mi> <mi>f</mi> </msub> <mrow> <mo>(</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, obtained from the NBRE method; (<b>b</b>) LO-ADTFD. Red dashed lines mark the first segment <math display="inline"><semantics> <mrow> <mo>[</mo> <msub> <mi>f</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>f</mi> <mn>2</mn> </msub> <mo>]</mo> </mrow> </semantics></math> chosen from <math display="inline"><semantics> <mrow> <mi>N</mi> <msub> <mi>C</mi> <mi>f</mi> </msub> <mrow> <mo>(</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> in which a significant increase in <math display="inline"><semantics> <mrow> <mi>N</mi> <msub> <mi>C</mi> <mi>f</mi> </msub> <mrow> <mo>(</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> is detected.</p>
Figure 4
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) segmented LO-ADTFD; (<b>b</b>) the local number of signal components <math display="inline"><semantics> <mrow> <mi>N</mi> <msub> <mi>C</mi> <mi>t</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> calculated on segmented LO-ADTFD; and (<b>c</b>) <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>t</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>f</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>. Red dashed lines mark detected segments that are evaluated with <math display="inline"><semantics> <msub> <mi>N</mi> <mi>r</mi> </msub> </semantics></math> measure in <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>t</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>f</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>. Green dashed lines mark a segment that is considered to have components suitable for the current time localization approach.</p>
Figure 5
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math> with LO-ADTFD. Yellow and dashed red rectangles point to the TF regions suitable for analysis using time slices, while the rest of the TFD in blue should be analyzed using frequency slices.</p>
Figure 6
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>t</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>f</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) the local number of signal components obtained by the STRE in starting TFD (red dashed line) and <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>t</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> (blue solid line); and (<b>d</b>) the local number of signal components obtained by the NBRE in starting TFD (red dashed line) and <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>f</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> (blue solid line).</p>
Figure 7
<p>Simplified flowchart for the automatic IF and GD estimation for a given TFD.</p>
Figure 8
<p>For the considered signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>: (<b>a</b>) estimated IFs, <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>t</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) estimated GDs, <math display="inline"><semantics> <mrow> <msubsup> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> <mi>f</mi> </msubsup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math>; and (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math> with LO-ADTFD. Yellow and dashed red rectangles point to the TF regions suitable for analysis using time slices, while the rest of the TFD in blue should be analyzed using frequency slices.</p>
Figure 9
<p>Estimated IFs and GDs for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> in AWGN with SNR <math display="inline"><semantics> <mrow> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> dB using (<b>a</b>) the image-based STRE method; (<b>b</b>) the image-based STRE-NBRE method; (<b>c</b>) the BSS-STRE method; and (<b>d</b>) the BSS-STRE-NBRE method.</p>
Figure 10
<p>Estimated IFs and GDs for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> using (<b>a</b>) the image-based STRE method; (<b>b</b>) the image-based STRE-NBRE method; (<b>c</b>) the BSS-STRE method; and (<b>d</b>) the BSS-STRE-NBRE method.</p>
Figure 11
<p>Estimated IFs and GDs obtained in <math display="inline"><semantics> <mrow> <msup> <mi>B</mi> <mrow> <mo>(</mo> <mi>shrink</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> using the shrinkage operator for the signals: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 12
<p>MSE between the local number of signal components estimated from noise-free and noisy LO-ADTFDs in AWGN with SNR <math display="inline"><semantics> <mrow> <mo>=</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>10</mn> <mo>]</mo> </mrow> </semantics></math> dB for the considered signals: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 13
<p>F1 values for evaluating the shrinkage-operator-based (blue line), BSS-STRE-NBRE (red line) and image-based STRE-NBRE (green line) IF/GD estimation algorithms’ sensitivity to AWGN in SNR <math display="inline"><semantics> <mrow> <mo>=</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>10</mn> <mo>]</mo> </mrow> </semantics></math> dB for the considered signals: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 14
<p>F-norm values for evaluating the shrinkage-operator-based (blue line), BSS-STRE-NBRE (red line) and image-based STRE-NBRE methods’ (green line) IF/GD estimation algorithms’ sensitivity to AWGN in SNR <math display="inline"><semantics> <mrow> <mo>=</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>10</mn> <mo>]</mo> </mrow> </semantics></math> dB for the considered signals: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>LFM</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>mix</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 15
<p>(<b>a</b>) EEG seizure signal considered in this study, <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, represented in time domain; (<b>b</b>) LO-ADTFD of the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; and (<b>c</b>) LO-ADTFD of the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 16
<p>(<b>a</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math> for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math> with LO-ADTFD for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math> for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; and (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>M</mi> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </semantics></math> with LO-ADTFD for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 17
<p>Extracted components with (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>t</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>f</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>t</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; and (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>κ</mi> <mi>f</mi> </msub> <mrow> <mo>{</mo> <msub> <mi>ρ</mi> <mrow> <mo>(</mo> <mi>l</mi> <mi>o</mi> <mo>)</mo> </mrow> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> 
</mrow> </semantics></math>.</p>
Figure 18
<p>Comparison between the local number of signal components obtained by STRE and NBRE in starting TFD (dashed red line) and from extracted components using the proposed operators <math display="inline"><semantics> <msub> <mi>κ</mi> <mi>t</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>κ</mi> <mi>f</mi> </msub> </semantics></math> (solid blue line) for the signals: (<b>a</b>,<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>,<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 19
<p>Using the method proposed in [<a href="#B47-sensors-23-04680" class="html-bibr">47</a>]: (<b>a</b>) estimated IFs of the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) estimated GDs of the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) estimated IFs of the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; and (<b>d</b>) estimated GDs of the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 20
<p>Estimated IFs and GDs using (<b>a</b>) the image-based STRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) the image-based STRE-NBRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) the BSS-STRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>d</b>) the BSS-STRE-NBRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>e</b>) the image-based STRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>f</b>) the image-based STRE-NBRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>g</b>) the BSS-STRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; and (<b>h</b>) the BSS-STRE-NBRE method for the signal <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 21
<p>Estimated IFs and GDs obtained in <math display="inline"><semantics> <mrow> <msup> <mi>B</mi> <mrow> <mo>(</mo> <mi>shrink</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>,</mo> <mi>f</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> using the shrinkage operator for the signals: (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mi>EEG</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <msub> <mi>EEG</mi> <mi>filt</mi> </msub> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
20 pages, 1200 KiB  
Article
Motor Imagery Multi-Tasks Classification for BCIs Using the NVIDIA Jetson TX2 Board and the EEGNet Network
by Tat’y Mwata-Velu, Edson Niyonsaba-Sebigunda, Juan Gabriel Avina-Cervantes, Jose Ruiz-Pinales, Narcisse Velu-A-Gulenga and Adán Antonio Alonso-Ramírez
Sensors 2023, 23(8), 4164; https://doi.org/10.3390/s23084164 - 21 Apr 2023
Cited by 2 | Viewed by 2570
Abstract
Brain–Computer Interfaces (BCIs) continue to attract considerable interest because of the advantages they offer in numerous domains, notably in helping people with motor disabilities communicate with their surrounding environment. However, challenges of portability, real-time processing, and accurate data processing remain for many BCI system setups. This work implements an embedded motor imagery multi-task classifier based on the EEGNet network integrated into the NVIDIA Jetson TX2 board. Two strategies are developed to select the most discriminant channels: the first uses an accuracy-based classifier criterion, while the second evaluates electrode mutual information to form discriminant channel subsets. The EEGNet network then classifies the signals from the selected channels. Additionally, a cyclic learning algorithm is implemented at the software level to accelerate the model’s convergence and fully exploit the NJT2 hardware resources. Finally, motor imagery Electroencephalogram (EEG) signals provided by HaLT’s public benchmark were used, together with k-fold cross-validation. Average accuracies of 83.7% and 81.3% were achieved when classifying EEG signals per subject and per motor imagery task, respectively. Each task was processed with an average latency of 48.7 ms. This framework offers an alternative for online EEG-BCI systems requiring short processing times and reliable classification accuracy. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
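The second channel-selection strategy in the abstract ranks electrodes by their mutual information with the task label. A sketch of that ranking idea with a simple histogram MI estimator is below; the two synthetic "channels" and the binning choices are illustrative assumptions, not the HaLT channels or the authors' estimator.

```python
import numpy as np

def mutual_information(feature, labels, bins=8):
    """Histogram estimate of I(feature; label) in bits."""
    edges = np.histogram_bin_edges(feature, bins)
    f_bins = np.digitize(feature, edges)  # bin index per sample
    classes = {c: i for i, c in enumerate(sorted(set(labels)))}
    joint = np.zeros((bins + 2, len(classes)))
    for fb, y in zip(f_bins, labels):
        joint[fb, classes[y]] += 1
    p = joint / joint.sum()
    pf, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pf @ py)[nz])).sum())

rng = np.random.default_rng(2)
labels = np.repeat([0, 1], 100)                        # two imagined-movement classes
informative = labels + 0.3 * rng.standard_normal(200)  # channel that tracks the class
noisy = rng.standard_normal(200)                       # channel unrelated to the class

mi_informative = mutual_information(informative, labels)
mi_noisy = mutual_information(noisy, labels)
```

Ranking channels by such a score and keeping the top subset is the essence of the CMIbA route; the informative channel scores close to one bit here, while the unrelated one scores near zero.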
Figure 1
<p>Overall flowchart of the proposed method. EEG signals of six MI tasks are provided by [<a href="#B40-sensors-23-04164" class="html-bibr">40</a>]. The red rectangle centered on the circle refers to “Passive” and moves according to the subject’s MI task. The first step selects discriminant channels from the 19 provided, using two comparative methods: the ARbC method and the CMIbA. The EEGNet network then classifies the feature signals into six classes to produce the output.</p>
Figure 2
<p>Channels’ spatial locations on the scalp used in the referred dataset. According to the 10–20 system, uppercase letters indicate the cortical region over which an electrode is placed: F for frontal, T for temporal, P for parietal, and O for occipital. The lowercase “z” denotes electrodes on the skull’s longitudinal (midline) axis. A1 and A2 are the left and right reference electrodes, respectively.</p>
Figure 3">
Figure 3
<p>Overview of the EEG acquisition and processing in the experimental paradigm. The red rectangle on the eGUI moves over the specific limb icon as a visual stimulus to engage the respective mental task of imagined movement. MI–EEG signals from six mental states were recorded by EEG-1200 equipment and processed using Neurofax recording software [<a href="#B40-sensors-23-04164" class="html-bibr">40</a>]. In addition, ASCII data were converted into Matlab files for further processing.</p>
Figure 4">
Figure 4
<p>The encapsulated EEGNet structure. EEG signals were organized by subject, channel, and sample length, and this data matrix was expanded to four dimensions to match the EEGNet input dimensions. In Part (a), temporal features are extracted by Conv2D, and in Part (b), spatial filters are applied to enhance the feature maps. The feature maps are then combined by Separable Conv2D (Part (c)), providing the output class probabilities (Part (d)).</p>
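The separable convolution stage in Part (c) factorizes a convolution into a per-channel (depthwise) filter followed by a 1×1 (pointwise) mix across channels, which cuts the parameter count relative to a full convolution. A minimal NumPy sketch of that idea on 1-D EEG signals follows; the shapes and names are illustrative, not EEGNet's exact layers:

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_kernels, point_weights):
    """x: (channels, samples) EEG window;
    depth_kernels: (channels, k) one temporal filter per channel;
    point_weights: (channels, n_out) 1x1 mixing weights.
    Returns (n_out, samples - k + 1)."""
    ch, _ = x.shape
    k = depth_kernels.shape[1]
    # Depthwise stage: filter each channel independently in time.
    depth_out = np.stack([
        np.convolve(x[c], depth_kernels[c][::-1], mode="valid")
        for c in range(ch)
    ])
    # Pointwise stage: linearly combine channels at each time step.
    return point_weights.T @ depth_out
```

A full convolution with the same shapes would need ch * k * n_out weights per output map; the separable form needs only ch * k + ch * n_out, which matters on an embedded target like the Jetson TX2.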
Figure 5">
Figure 5
<p>t-SNE distribution illustrations of selected channels’ signals for all subjects using the ARbC method and CMIbA before the main processing step. All figures were plotted in 2-D embedded space using the Euclidean metric, setting the nearest neighbors’ number at 10, the number of iterations for the optimization at 1000, and the gradient norm at 0.0001. (<b>a</b>) ARbC: six-channel combination: distribution of {Fp1,F8,Fp2,F7,P3,Cz} channel signals, (<b>b</b>) CMIbA: six-channel combination: distribution of {P4,T6,T3,P3,F4,O2} channel signals, (<b>c</b>) ARbC: eight-channel combination: distribution of {Fp1,F8,Fp2,F7,P3,Cz,O1,P4} channel signals, (<b>d</b>) CMIbA: eight-channel combination: distribution of {P4,T6,T3,P3,F4,O2,Fp2,Fz} channel signals.</p>
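The t-SNE projections described in the caption can be reproduced in outline with scikit-learn. The data below are random stand-ins purely to show the call; `perplexity` here plays the role of the caption's nearest-neighbor setting, and `min_grad_norm` matches its gradient-norm stopping criterion:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for per-trial features of one selected channel subset:
# 60 trials x 8 channels (e.g., an eight-channel combination).
features = rng.standard_normal((60, 8))

embedding = TSNE(
    n_components=2,      # 2-D embedded space, as in the figure
    perplexity=10,       # roughly the caption's 10 nearest neighbors
    min_grad_norm=1e-4,  # gradient-norm threshold from the caption
    init="pca",
    random_state=0,
).fit_transform(features)
```

The resulting `(n_trials, 2)` embedding is what gets scattered per class to judge, before the main processing step, how separable the selected channels' signals already are.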
Figure 6">
Figure 6
<p>Illustration of MI–EEG features before and after the classification using the EEGNet network. In this example, subject J’s data are provided by the Fp1 channel. The window was set to 170 samples, corresponding to task duration. The normalized magnitude is given in <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>V, while SPS and <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> mean the number of samples per second and feature magnitude, respectively. (<b>a</b>) MI–EEG signals before classification. (<b>b</b>) MI–EEG features after classification.</p>
">