EEG-Based Classification of Spoken Words Using Machine Learning Approaches
Figure 1. Photograph of the experimental setup. The participant is seated in front of the monitor, wearing the EEG cap. The monitor shows the word *comida*, corresponding to the pronunciation stage.

Figure 2. Graphical illustration of the temporal sequence of a trial. The trial starts with 3 s of attention (fixation cross); then, for 3 s, the participant is shown one of the five words, which must be pronounced only once; finally, there are 3 s of rest (palm). The lower part of the image shows the data segment from −1.5 s to 1.5 s, corresponding to the time window of interest.

Figure 3. Architecture of the convolutional neural network EEGNet.

Figure 4. Distribution of classification accuracy, recall per class, and precision per class obtained with the PSD + SVM and EEGNet methods in the *Attention* vs. *Pronunciation* classification scenario. The diamonds represent outliers. In all performance metrics, no significant differences were found between the two classification methods (Wilcoxon signed-rank test, p > 0.01).

Figure 5. Distribution of classification accuracy, recall per class, and precision per class obtained with the PSD + SVM and EEGNet methods in the *Short words* vs. *Long words* classification scenario. In all performance metrics, significant differences were found between the two classification methods (Wilcoxon signed-rank test, p < 0.01).

Figure 6. Distribution of classification accuracy for each word pair obtained with the PSD + SVM and EEGNet methods in the *Word* vs. *Word* classification scenario. The diamonds represent outliers. In eight of the ten word-pair classifications, significant differences were found between the two classification methods (Wilcoxon signed-rank test, p < 0.01).

Figure 7. Distribution of classification accuracy, recall per class, and precision per class obtained with the PSD + SVM and EEGNet methods in the *multiclass* scenario. The diamonds represent outliers. In nine of the eleven computed performance metrics, significant differences were found between the two classification methods (Wilcoxon signed-rank test, p < 0.01): class 1 = *si*, class 2 = *no*, class 3 = *agua*, class 4 = *comida*, class 5 = *dormir*.

Figure A1. Accuracy per participant for each method in classifying the attention vs. speech segment. The dotted black line marks chance level (50%).

Figure A2. Accuracy per participant for each method in classifying short words vs. long words. The dotted black line marks chance level (50%).

Figure A3. Accuracy per participant for each method in multiclass classification. The dotted black line marks chance level (20%).
Abstract
1. Introduction
2. Materials and Methods
2.1. EEG Data Recording Experiment
2.2. EEG Data Processing and Preparation
2.3. Classification Scenarios
- Attention vs. Pronunciation. The objective in this scenario was to discriminate between the absence of pronunciation and pronounced words, regardless of the word. For the first class, EEG signals in the time segment from −1.5 s to 0 s were used; for the second class, EEG signals in the segment from 0 s to 1.5 s were used (see the windowing sketch after this list). For a participant with no removed trials, this scenario contained 200 trials per class. This scenario matters because the ability to discriminate segments with and without pronunciation is the initial step in developing a system that decodes attempted speech in patients with speech impairments.
- Short words vs. Long words. In this scenario, we studied the classification between two groups of words according to their length. The short-word group contained the one-syllable words *si* and *no*, while the long-word group contained the two-syllable words *agua* and *dormir*. For a participant with no removed trials, this scenario contained 80 trials per class. The aim was to determine whether short and long words can be distinguished, since here the short words correspond to answers to binary questions.
- Word vs. Word. Here, we investigated the classification between all possible pairs of words (ten bi-class problems, since there are five words). The objective was to determine which word pairs can and cannot be discriminated from the EEG signals. For a participant with no removed trials, this scenario contained 40 trials per class.
- All words. The objective in this scenario was to study recognition among the five words; this is therefore a multiclass classification with five classes. For a participant with no removed trials, this scenario contained 40 trials per class.
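To make the scenario construction concrete, the following is a minimal sketch of how the Attention vs. Pronunciation dataset can be assembled from epoched trials. It assumes the recordings have already been cut into trials spanning −1.5 s to 1.5 s around speech onset and stored as a NumPy array of shape (n_trials, n_channels, n_samples); the 250 Hz sampling rate and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz); the study's actual rate may differ

def attention_vs_pronunciation(trials: np.ndarray):
    """Split trials spanning [-1.5, 1.5] s around speech onset into the
    two classes: attention (-1.5 to 0 s) and pronunciation (0 to 1.5 s)."""
    n_trials, _, n_samples = trials.shape
    onset = n_samples // 2                    # sample index of t = 0 s
    attention = trials[:, :, :onset]          # class 0: no pronunciation
    pronunciation = trials[:, :, onset:]      # class 1: spoken word
    X = np.concatenate([attention, pronunciation], axis=0)
    y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])
    return X, y

# Example with synthetic data: 200 trials, 24 channels, 3 s at 250 Hz
trials = np.random.randn(200, 24, int(3.0 * FS))
X, y = attention_vs_pronunciation(trials)
print(X.shape, y.shape)  # (400, 24, 375) (400,)
```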
2.4. Classification Methods
2.4.1. Feature Extraction and Classifier
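This method pairs multitaper PSD features [36,37] with an SVM classifier [39,40]. The block below is a minimal sketch of such a pipeline, not the authors' exact implementation: the frequency band, kernel, and regularization constant are assumptions.

```python
import numpy as np
from mne.time_frequency import psd_array_multitaper
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def psd_features(X: np.ndarray, fs: float, fmin: float = 4.0, fmax: float = 40.0):
    """Multitaper PSD per channel, flattened into one feature vector per trial.
    X has shape (n_trials, n_channels, n_samples). Band limits are assumed."""
    psds, _freqs = psd_array_multitaper(X, sfreq=fs, fmin=fmin, fmax=fmax,
                                        verbose=False)
    return psds.reshape(len(X), -1)

# Hypothetical usage with the arrays from the windowing sketch above
features = psd_features(X, fs=250.0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(features, y)
```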
2.4.2. EEGNet
Network Architecture and Hyperparameters
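The EEGNet architecture is fully specified by Lawhern et al. [42]; the following condensed Keras sketch reproduces its two blocks with the original paper's default hyperparameters (F1 = 8, D = 2, F2 = 16), which are assumptions here and may differ from the configuration used in this study.

```python
from tensorflow.keras import layers, models, constraints

def eegnet(n_channels=24, n_samples=375, n_classes=2,
           F1=8, D=2, F2=16, kern_len=64, dropout=0.5):
    """Condensed EEGNet (Lawhern et al., 2018). Defaults follow the
    original paper, not necessarily the configuration of this study."""
    inp = layers.Input(shape=(n_channels, n_samples, 1))
    # Block 1: temporal convolution, then depthwise spatial filtering
    x = layers.Conv2D(F1, (1, kern_len), padding="same", use_bias=False)(inp)
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=D,
                               use_bias=False,
                               depthwise_constraint=constraints.max_norm(1.0))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(dropout)(x)
    # Block 2: separable convolution mixing the feature maps
    x = layers.SeparableConv2D(F2, (1, 16), padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 8))(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(n_classes, activation="softmax",
                       kernel_constraint=constraints.max_norm(0.25))(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```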
Network Training and Classification
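A hedged training sketch follows, reusing the `eegnet` function and the arrays from the windowing sketch. The 80/20 split, batch size, epoch budget, and early-stopping patience are assumptions for illustration, not the study's schedule.

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping

# X, y from the windowing sketch; add the trailing channel axis Keras expects
X_train, X_test, y_train, y_test = train_test_split(
    X[..., None], y, test_size=0.2, stratify=y, random_state=0)

model = eegnet(n_channels=X.shape[1], n_samples=X.shape[2], n_classes=2)
model.fit(X_train, y_train, epochs=100, batch_size=16,
          validation_split=0.2,
          callbacks=[EarlyStopping(patience=10, restore_best_weights=True)])
print(model.evaluate(X_test, y_test))  # [loss, accuracy] on the held-out fold
```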
2.5. Validation Procedure and Performance Metrics
- i. The total classification accuracy represents the fraction of correctly classified instances in the test set [45]. For a test set of $N$ instances with true labels $y_i$ and predicted labels $\hat{y}_i$, it is defined as follows: $\mathrm{accuracy} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(\hat{y}_i = y_i)$.
- ii. Recall per class, or $R_c$, is the number of elements correctly classified in class $c$ divided by the total number of elements that should have been classified in that class, i.e., the performance of the model when detecting that class. It is defined as follows: $R_c = \frac{TP_c}{TP_c + FN_c}$, where $TP_c$ and $FN_c$ denote the true positives and false negatives for class $c$.
- iii. Precision per class, or $P_c$, is the number of elements correctly classified in class $c$ divided by the total number of elements classified as belonging to that class. It indicates how reliable the model is when predicting a specific class. It is defined as follows: $P_c = \frac{TP_c}{TP_c + FP_c}$, where $FP_c$ denotes the false positives for class $c$. A scikit-learn sketch of these three metrics follows this list.
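As noted at the end of the list above, these metrics map directly onto scikit-learn [39] routines; a minimal sketch with hypothetical label arrays:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical true and predicted labels for a 5-class test fold
y_true = np.array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4])
y_pred = np.array([0, 1, 2, 3, 0, 0, 1, 1, 3, 4])

acc = accuracy_score(y_true, y_pred)                   # total accuracy
recall_per_class = recall_score(y_true, y_pred, average=None)
precision_per_class = precision_score(y_true, y_pred, average=None,
                                      zero_division=0)
print(f"accuracy={acc:.2f}")
print("recall per class:", recall_per_class)
print("precision per class:", precision_per_class)
```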
3. Results
3.1. Bi-Class: Attention vs. Pronunciation
3.2. Bi-Class: Short Words vs. Long Words
3.3. Bi-Class: Word vs. Word
3.4. Multiclass: All Words
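The significance statements reported throughout Section 3 come from Wilcoxon signed-rank tests on paired per-participant scores (cf. [46]). A minimal sketch with hypothetical per-participant accuracies:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-participant accuracies (one value per participant per method)
acc_psd_svm = np.array([0.62, 0.70, 0.58, 0.66, 0.71, 0.64, 0.60, 0.68])
acc_eegnet  = np.array([0.74, 0.78, 0.65, 0.70, 0.80, 0.72, 0.69, 0.77])

stat, p = wilcoxon(acc_psd_svm, acc_eegnet)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
# Significance is declared at p < 0.01, matching the threshold used in the paper
```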
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| ALS | Amyotrophic lateral sclerosis |
| ANN | Artificial neural network |
| BCI | Brain–computer interface |
| EEG | Electroencephalography |
| MI | Motor imagery |
| PSD | Power spectral density |
| SSVEP | Steady-state visually evoked potential |
| SVM | Support vector machine |
Appendix A. ICA Implementation
| Parameter | Value |
|---|---|
| numcomponent | Number of channels |
| pca | No dimensionality reduction performed |
| approach | ‘extended’ |
| g | ‘tanh’ |
| stopping | |
| maxsteps | 512 |
| lrate | ‘auto’ |
| weights | Initialized based on data |
| sphere | Initialized based on data |
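The parameter names in the table above match the extended Infomax implementation of EEGLAB/FieldTrip (runica). Purely as an illustration, an approximately equivalent configuration in MNE-Python could look like the following; the file name is hypothetical, and mapping ‘lrate’ and ‘stopping’ to MNE's defaults is an assumption.

```python
import mne
from mne.preprocessing import ICA

# Hypothetical raw EEG object; in practice this is the preprocessed recording
raw = mne.io.read_raw_fif("participant_01_raw.fif", preload=True)

# Extended Infomax with one component per channel, no PCA reduction,
# and a 512-step limit, mirroring the runica settings in the table above
ica = ICA(n_components=None,  # keep one component per data channel
          method="infomax",
          fit_params=dict(extended=True, max_iter=512),
          random_state=0)
ica.fit(raw)
```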
Appendix B. Classification Results per Participant
Appendix B.1. Bi-Class: Attention vs. Pronunciation
Appendix B.2. Bi-Class: Short Words vs. Long Words
Appendix B.3. Multiclass: All Words
References
- Cuetos, F. Neurociencia del Lenguaje: Bases Neurológicas e Implicaciones Clínicas, 1st ed.; Editorial Médica Panamericana: Madrid, Spain, 2012; p. 111. [Google Scholar]
- Ladefoged, P.; Johnson, K. A Course in Phonetics, 7th ed.; Cengage Learning: Boston, MA, USA, 2014; pp. 2–5. [Google Scholar]
- Lee, J.; Madhavan, A.; Krajewski, E.; Lingenfelter, S. Assessment of dysarthria and dysphagia in patients with amyotrophic lateral sclerosis: Review of the current evidence. Muscle Nerve 2021, 64, 520–531. [Google Scholar] [CrossRef]
- Caligari, M.; Godi, M.; Guglielmetti, S.; Franchignoni, F.; Nardone, A. Eye tracking communication devices in amyotrophic lateral sclerosis: Impact on disability and quality of life. Amyotroph. Lateral Scler. Front. Degener. 2013, 14, 546–552. [Google Scholar] [CrossRef]
- Wolpaw, J.R. Brain–computer interfaces. Handb. Clin. Neurol. 2013, 110, 67–74. [Google Scholar]
- Zhao, Y.; Chen, Y.; Cheng, K.; Huang, W. Artificial Intelligence Based Multimodal Language Decoding from Brain Activity: A Review. Brain Res. Bull. 2023, 201, 110713. [Google Scholar] [CrossRef]
- Lotte, F.; Congedo, M.; Lécuyer, A.; Lamarche, F.; Arnaldi, B. A review of classification algorithms for EEG-based brain–computer interfaces. J. Neural Eng. 2007, 4, R1. [Google Scholar] [CrossRef]
- Aggarwal, S.; Chugh, N. Review of machine learning techniques for EEG based brain computer interface. Arch. Comput. Methods Eng. 2022, 29, 3001–3020. [Google Scholar] [CrossRef]
- Hernández-Rojas, L.G.; Montoya, O.M.; Antelis, J.M. Anticipatory detection of self-paced rehabilitative movements in the same upper limb from EEG signals. IEEE Access 2020, 8, 119728–119743. [Google Scholar] [CrossRef]
- Rezeika, A.; Benda, M.; Stawicki, P.; Gembler, F.; Saboor, A.; Volosyak, I. Brain–computer interface spellers: A review. Brain Sci. 2018, 8, 57. [Google Scholar] [CrossRef]
- Delijorge, J.; Mendoza-Montoya, O.; Gordillo, J.L.; Caraza, R.; Martinez, H.R.; Antelis, J.M. Evaluation of a p300-based brain-machine interface for a robotic hand-orthosis control. Front. Neurosci. 2020, 14, 589659. [Google Scholar] [CrossRef]
- Hernandez-Rojas, L.G.; Cantillo-Negrete, J.; Mendoza-Montoya, O.; Carino-Escobar, R.I.; Leyva-Martinez, I.; Aguirre-Guemez, A.V.; Barrera-Ortiz, A.; Carrillo-Mora, P.; Antelis, J.M. Brain-computer interface controlled functional electrical stimulation: Evaluation with healthy subjects and spinal cord injury patients. IEEE Access 2022, 10, 46834–46852. [Google Scholar] [CrossRef]
- Värbu, K.; Muhammad, N.; Muhammad, Y. Past, present, and future of EEG-based BCI applications. Sensors 2022, 22, 3331. [Google Scholar] [CrossRef]
- Hekmatmanesh, A.; Azni, H.M.; Wu, H.; Afsharchi, M.; Li, M.; Handroos, H. Imaginary control of a mobile vehicle using deep learning algorithm: A brain computer interface study. IEEE Access 2021, 10, 20043–20052. [Google Scholar] [CrossRef]
- Hekmatmanesh, A.; Wu, H.; Handroos, H. Largest Lyapunov Exponent Optimization for Control of a Bionic-Hand: A Brain Computer Interface Study. Front. Rehabil. Sci. 2022, 2, 802070. [Google Scholar] [CrossRef]
- Kundu, S.; Ari, S. Brain-computer interface speller system for alternative communication: A review. IRBM 2022, 43, 317–324. [Google Scholar] [CrossRef]
- Mannan, M.M.N.; Kamran, M.A.; Kang, S.; Choi, H.S.; Jeong, M.Y. A hybrid speller design using eye tracking and SSVEP brain–computer interface. Sensors 2020, 20, 891. [Google Scholar] [CrossRef]
- Nieto, N.; Peterson, V.; Rufiner, H.L.; Kamienkowski, J.E.; Spies, R. Thinking out loud, an open-access EEG-based BCI dataset for inner speech recognition. Sci. Data 2022, 9, 52. [Google Scholar] [CrossRef]
- Moses, D.A.; Metzger, S.L.; Liu, J.R.; Anumanchipalli, G.K.; Makin, J.G.; Sun, P.F.; Chartier, J.; Dougherty, M.E.; Liu, P.M.; Abrams, G.M.; et al. Neuroprosthesis for decoding speech in a paralyzed person with anarthria. N. Engl. J. Med. 2021, 385, 217–227. [Google Scholar] [CrossRef]
- Anumanchipalli, G.K.; Chartier, J.; Chang, E.F. Speech synthesis from neural decoding of spoken sentences. Nature 2019, 568, 493–498. [Google Scholar] [CrossRef]
- Angrick, M.; Ottenhoff, M.C.; Diener, L.; Ivucic, D.; Ivucic, G.; Goulis, S.; Saal, J.; Colon, A.J.; Wagner, L.; Krusienski, D.J.; et al. Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity. Commun. Biol. 2021, 4, 1055. [Google Scholar] [CrossRef]
- Correia, J.M.; Jansma, B.; Hausfeld, L.; Kikkert, S.; Bonte, M. EEG decoding of spoken words in bilingual listeners: From words to language invariant semantic-conceptual representations. Front. Psychol. 2015, 6, 71. [Google Scholar] [CrossRef]
- McMurray, B.; Sarrett, M.E.; Chiu, S.; Black, A.K.; Wang, A.; Canale, R.; Aslin, R.N. Decoding the temporal dynamics of spoken word and nonword processing from EEG. NeuroImage 2022, 260, 119457. [Google Scholar] [CrossRef] [PubMed]
- Vorontsova, D.; Menshikov, I.; Zubov, A.; Orlov, K.; Rikunov, P.; Zvereva, E.; Flitman, L.; Lanikin, A.; Sokolova, A.; Markov, S.; et al. Silent EEG-speech recognition using convolutional and recurrent neural network with 85% accuracy of 9 words classification. Sensors 2021, 21, 6744. [Google Scholar] [CrossRef]
- Rojas, S.J.B.; Ramírez-Valencia, R.; Alonso-Vázquez, D.; Caraza, R.; Martinez, H.R.; Mendoza-Montoya, O.; Antelis, J.M. Recognition of grammatical classes of overt speech using electrophysiological signals and machine learning. In Proceedings of the 2022 IEEE 4th International Conference on BioInspired Processing (BIP), Cartago, Costa Rica, 15–17 November 2022; pp. 1–6. [Google Scholar]
- Datta, S.; Boulgouris, N.V. Recognition of grammatical class of imagined words from EEG signals using convolutional neural network. Neurocomputing 2021, 465, 301–309. [Google Scholar] [CrossRef]
- Sarmiento, L.C.; Villamizar, S.; López, O.; Collazos, A.C.; Sarmiento, J.; Rodríguez, J.B. Recognition of EEG signals from imagined vowels using deep learning methods. Sensors 2021, 21, 6503. [Google Scholar] [CrossRef] [PubMed]
- Nguyen, C.H.; Karavas, G.K.; Artemiadis, P. Inferring imagined speech using EEG signals: A new approach using Riemannian manifold features. J. Neural Eng. 2017, 15, 016002. [Google Scholar] [CrossRef]
- Cooney, C.; Korik, A.; Folli, R.; Coyle, D. Classification of imagined spoken word-pairs using convolutional neural networks. In Proceedings of the 8th Graz BCI Conference, Graz, Austria, 19–20 September 2019; pp. 338–343. [Google Scholar]
- Cooney, C.; Korik, A.; Folli, R.; Coyle, D. Evaluation of hyperparameter optimization in machine and deep learning methods for decoding imagined speech EEG. Sensors 2020, 20, 4629. [Google Scholar] [CrossRef]
- Agarwal, P.; Kumar, S. Electroencephalography-based imagined speech recognition using deep long short-term memory network. ETRI J. 2022, 44, 672–685. [Google Scholar] [CrossRef]
- Abdulghani, M.M.; Walters, W.L.; Abed, K.H. Imagined Speech Classification Using EEG and Deep Learning. Bioengineering 2023, 10, 649. [Google Scholar] [CrossRef]
- García-Salinas, J.S.; Torres-García, A.A.; Reyes-García, C.A.; Villaseñor-Pineda, L. Intra-subject class-incremental deep learning approach for EEG-based imagined speech recognition. Biomed. Signal Process. Control. 2023, 81, 104433. [Google Scholar] [CrossRef]
- Fedorenko, E.; Thompson-Schill, S.L. Reworking the language network. Trends Cogn. Sci. 2014, 18, 120–126. [Google Scholar] [CrossRef]
- Koizumi, K.; Ueda, K.; Nakao, M. Development of a Cognitive Brain-Machine Interface Based on a Visual Imagery Method. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1062–1065. [Google Scholar]
- Babadi, B.; Brown, E.N. A review of multitaper spectral analysis. IEEE Trans. Biomed. Eng. 2014, 61, 1555–1564. [Google Scholar] [CrossRef] [PubMed]
- Mitra, P.P.; Pesaran, B. Analysis of dynamic brain imaging data. Biophys. J. 1999, 76, 691–708. [Google Scholar] [CrossRef] [PubMed]
- James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning with Applications in R, 2nd ed.; Springer: New York, NY, USA, 2013; pp. 68–78. [Google Scholar]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Guler, I.; Ubeyli, E.D. Multiclass support vector machines for EEG-signals classification. IEEE Trans. Inf. Technol. Biomed. 2007, 11, 117–126. [Google Scholar] [CrossRef]
- Panachakel, J.T.; Ramakrishnan, A.G. Decoding covert speech from EEG-A comprehensive review. Front. Neurosci. 2021, 15, 392. [Google Scholar] [CrossRef]
- Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
- Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 2390–2397. [Google Scholar]
- Hastie, T.; Tibshirani, R.; Friedman, J.H. The Elements of Statistical Learning Data Mining, Inference, and Prediction, 2nd ed.; Springer: New York, NY, USA, 2009; pp. 392–395. [Google Scholar]
- Japkowicz, N.; Shah, M. Evaluating Learning Algorithms: A Classification Perspective, 1st ed.; Cambridge University Press: New York, NY, USA, 2011; pp. 85–86. [Google Scholar]
- Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
- Sereshkeh, A.R.; Trott, R.; Bricout, A.; Chau, T. EEG classification of covert speech using regularized neural networks. IEEE/ACM Trans. Audio Speech Lang. Process. 2017, 25, 2292–2300. [Google Scholar] [CrossRef]
- Masrori, P.; Van Damme, P. Amyotrophic lateral sclerosis: A clinical review. Eur. J. Neurol. 2020, 27, 1918–1929. [Google Scholar] [CrossRef]
- Goncharova, I.I.; McFarland, D.J.; Vaughan, T.M.; Wolpaw, J.R. EMG contamination of EEG: Spectral and topographical characteristics. Clin. Neurophysiol. 2003, 114, 1580–1593. [Google Scholar] [CrossRef]
| Method | Accuracy | Recall (attention) | Recall (pronunciation) | Precision (attention) | Precision (pronunciation) |
|---|---|---|---|---|---|
| PSD + SVM | 91.04 ± 5.82% | 91.30 ± 6.42% | 90.79 ± 5.64% | 90.82 ± 5.75% | 91.34 ± 6.10% |
| EEGNet | 90.69 ± 5.21% | 89.80 ± 7.16% | 91.59 ± 5.69% | 91.58 ± 5.24% | 90.26 ± 6.35% |
| Method | Accuracy | Recall (short words) | Recall (long words) | Precision (short words) | Precision (long words) |
|---|---|---|---|---|---|
| PSD + SVM | 65.72 ± 9.56% | 68.23 ± 9.58% | 63.02 ± 10.92% | 65.43 ± 9.27% | 66.01 ± 10.02% |
| EEGNet | 73.91 ± 10.04% | 75.05 ± 9.57% | 72.75 ± 12.57% | 74.21 ± 10.88% | 74.00 ± 10.03% |
| Word pair | Method | Accuracy | Recall (word 1) | Recall (word 2) | Precision (word 1) | Precision (word 2) |
|---|---|---|---|---|---|---|
| 1 | PSD + SVM | 64.56 ± 11.69% | 65.12 ± 12.59% | 63.97 ± 13.31% | 64.95 ± 11.60% | 64.37 ± 12.11% |
| 1 | EEGNet | 76.61 ± 10.78% | 77.18 ± 11.44% | 76.01 ± 13.00% | 77.07 ± 11.18% | 76.71 ± 11.25% |
| 2 | PSD + SVM | 73.13 ± 11.43% | 75.10 ± 11.46% | 70.95 ± 13.12% | 73.26 ± 11.72% | 73.31 ± 11.60% |
| 2 | EEGNet | 81.23 ± 10.47% | 81.96 ± 10.07% | 80.26 ± 14.20% | 82.26 ± 10.84% | 80.61 ± 11.46% |
| 3 | PSD + SVM | 68.32 ± 11.80% | 69.11 ± 11.60% | 67.60 ± 13.49% | 68.24 ± 12.53% | 68.63 ± 11.56% |
| 3 | EEGNet | 76.24 ± 11.42% | 76.37 ± 12.35% | 76.06 ± 12.99% | 76.38 ± 11.96% | 76.69 ± 11.90% |
| 4 | PSD + SVM | 73.85 ± 11.29% | 75.36 ± 10.91% | 72.08 ± 14.19% | 74.47 ± 11.89% | 73.38 ± 11.26% |
| 4 | EEGNet | 80.76 ± 10.28% | 81.69 ± 9.63% | 79.76 ± 13.93% | 81.57 ± 10.92% | 80.24 ± 10.82% |
| 5 | PSD + SVM | 73.97 ± 13.58% | 74.94 ± 14.11% | 72.92 ± 14.10% | 73.00 ± 13.37% | 74.05 ± 13.88% |
| 5 | EEGNet | 76.54 ± 11.77% | 76.63 ± 10.71% | 76.47 ± 15.71% | 78.08 ± 13.00% | 75.71 ± 11.96% |
| 6 | PSD + SVM | 68.51 ± 12.04% | 67.87 ± 13.02% | 68.98 ± 12.86% | 68.28 ± 12.31% | 68.88 ± 12.18% |
| 6 | EEGNet | 80.13 ± 10.74% | 78.63 ± 12.34% | 81.44 ± 11.58% | 80.53 ± 11.87% | 80.02 ± 10.20% |
| 7 | PSD + SVM | 63.08 ± 11.77% | 63.10 ± 13.40% | 62.99 ± 12.27% | 62.66 ± 12.39% | 63.65 ± 11.42% |
| 7 | EEGNet | 78.74 ± 12.55% | 76.82 ± 15.65% | 80.06 ± 12.35% | 78.38 ± 13.38% | 79.38 ± 12.85% |
| 8 | PSD + SVM | 73.48 ± 14.76% | 71.14 ± 16.33% | 75.49 ± 13.93% | 73.26 ± 15.45% | 73.57 ± 14.42% |
| 8 | EEGNet | 78.74 ± 12.56% | 76.82 ± 12.34% | 81.44 ± 11.58% | 80.53 ± 11.87% | 80.02 ± 10.20% |
| 9 | PSD + SVM | 73.10 ± 13.30% | 70.28 ± 15.19% | 75.60 ± 13.02% | 73.24 ± 14.40% | 72.96 ± 12.72% |
| 9 | EEGNet | 78.41 ± 11.95% | 77.95 ± 14.34% | 78.79 ± 12.01% | 77.71 ± 12.68% | 79.44 ± 12.03% |
| 10 | PSD + SVM | 63.02 ± 11.13% | 62.97 ± 12.87% | 63.06 ± 11.11% | 63.11 ± 11.29% | 63.06 ± 11.42% |
| 10 | EEGNet | 71.20 ± 11.68% | 71.32 ± 14.68% | 71.01 ± 11.40% | 71.04 ± 12.19% | 71.69 ± 11.88% |
| Method | Accuracy | Recall (si) | Recall (no) | Recall (agua) | Recall (comida) | Recall (dormir) | Precision (si) | Precision (no) | Precision (agua) | Precision (comida) | Precision (dormir) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PSD + SVM | 41.78 ± 13.34% | 46.55 ± 14.20% | 39.48 ± 15.62% | 49.97 ± 20.06% | 40.31 ± 16.44% | 33.01 ± 16.30% | 39.02 ± 11.88% | 36.69 ± 16.51% | 54.32 ± 20.05% | 43.53 ± 17.25% | 38.40 ± 16.06% |
| EEGNet | 54.87 ± 14.51% | 56.33 ± 17.68% | 55.63 ± 20.57% | 57.47 ± 22.22% | 54.79 ± 19.40% | 50.16 ± 18.31% | 55.00 ± 14.06% | 52.24 ± 19.13% | 60.60 ± 20.71% | 54.07 ± 16.19% | 52.63 ± 14.78% |