CN114699078A - Emotion recognition method and system based on small number of channel EEG signals - Google Patents

Emotion recognition method and system based on small number of channel EEG signals

Info

Publication number
CN114699078A
CN114699078A (application CN202210227246.7A)
Authority
CN
China
Prior art keywords
signals
signal
electroencephalogram
basic
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210227246.7A
Other languages
Chinese (zh)
Inventor
邓欣
吕向伟
肖立峰
杨鹏飞
刘珂
陈乔松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202210227246.7A
Publication of CN114699078A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B5/377 Electroencephalography [EEG] using evoked responses
    • A61B5/378 Visual stimuli
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7235 Details of waveform analysis
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention belongs to the field of brain-computer interfaces and relates in particular to an emotion recognition method and system based on EEG (electroencephalogram) signals from a small number of channels. The method comprises: acquiring EEG signals from 4 channels on the user's scalp and preprocessing them; converting the preprocessed EEG signals into basic signals, and decomposing and reconstructing the basic signals to obtain a series of component signals; extracting differential entropy from the basic and component signals as features and smoothing the features; and inputting the smoothed features into a trained classification model to detect the user's emotion. The invention detects the user's emotion from EEG signals of only a few channels: the user simply engages emotionally with the actual scene, a PC analyzes the EEG signals collected from the user's cerebral cortex, and after preprocessing and feature extraction the final features are fed to a classifier whose output reflects the user's emotion category at that moment.

Description

Emotion recognition method and system based on small number of channel EEG signals
Technical Field
The invention belongs to the field of brain-computer interfaces, and particularly relates to an emotion recognition method and system based on EEG signals from a small number of channels.
Background
Emotions play an important role in daily life and encompass psychological states such as positive emotion, negative emotion, alertness and fatigue, all of which affect a person's physiological state. Researchers in various fields analyze psychological state using data such as facial expressions, voice signals, short messages, electrocardiograms and electroencephalograms, and have found that electroencephalogram (EEG) signals are the least affected by differences in personal characteristics. However, EEG channels are noisy and the signal-to-noise ratio is low, which limits the analysis and processing of EEG signals.
With the rapid development of computers, wireless communication and intelligent computing, researchers are constantly developing intelligent wearable devices to improve people's lives. However, few wearable devices for EEG-based emotion recognition are currently on the market. This is due to limitations of EEG acquisition technology: the EEG collected by head-mounted electrode caps is noisy, and the multi-channel laboratory acquisition setups and emotion recognition algorithms in use today are complex and difficult to transfer to wearable devices.
In recent years, most research on EEG-based emotion recognition has focused on improving classification accuracy with a large number of channels, and such algorithms are difficult to apply to wearable devices. Moreover, although the EEG data of all cerebral-cortex channels carry information from every brain region, using all channels for a specific task is neither practical for wearable devices nor conducive to bringing the algorithm to the mass market. Therefore, collecting real-time data from fewer channels and completing the classification task with them can greatly improve the practicability of the algorithm on wearable devices.
Disclosure of Invention
In order to solve the above problems, the invention provides an emotion recognition method and system based on EEG signals from a small number of channels.
In a first aspect, the present invention provides a method for emotion recognition based on EEG signals from a small number of channels, comprising:
s1, acquiring electroencephalogram signals of 4 channels of a user scalp, and preprocessing the electroencephalogram signals;
s2, converting the preprocessed electroencephalogram signals into basic signals, and decomposing and reconstructing the basic signals to obtain a series of component signals;
s3, extracting differential entropy of the basic signal and the component signal as features, and smoothing the features;
and S4, inputting the smoothed features into a trained classification model to detect the emotion of the user.
Further, the 4 channels of the user's scalp are the FP1 and FP2 channels located over the prefrontal lobe and the T7 and T8 channels located over the temporal lobe.
Further, the electroencephalogram signals of the FP1, FP2, T7 and T8 channels are preprocessed, and basic signals are constructed from the potential differences between the channel electrodes; that is, pairwise subtraction of the four preprocessed electroencephalogram signals yields 6 basic signals: FP1-T7, FP1-T8, FP1-FP2, FP2-T7, FP2-T8 and T7-T8.
Further, the process of preprocessing the electroencephalogram signals is as follows:
performing down-sampling on the electroencephalogram signal to obtain an electroencephalogram signal of 200 Hz;
and filtering the 200Hz electroencephalogram signal by adopting a Butterworth filter to obtain the filtered electroencephalogram signal.
Further, intrinsic time scale decomposition, discrete wavelet transform, variational modal decomposition and phase space reconstruction are used to decompose and reconstruct the basic signal, thereby enriching the available modes, wherein:
the intrinsic time scale decomposition processes the basic signal by affine linear transformation to obtain the proper rotation component signals PRC1 and PRC2;
the discrete wavelet transform extracts component signals of four physiological electroencephalogram bands, namely Gamma, Beta, Alpha and Theta, from the basic signal by scaling and shifting a mother wavelet function;
the variational modal decomposition extracts intrinsic mode functions from the basic signal with a parallel method, yielding the 4 component signals VMD1, VMD2, VMD3 and VMD4;
and reconstructing the basic signal by phase space reconstruction to obtain a 3D-PSR-ED component signal.
Further, a linear dynamic system is used to filter out the components of the features that are unrelated to the emotional state, completing the feature smoothing. The linear dynamic system is expressed as:
x_t = z_t + w_t
z_t = A z_{t-1} + v_t
where x_t is the observed variable, z_t is the hidden emotion variable, A is the transition matrix, w_t is Gaussian observation noise with variance Q, and v_t is Gaussian process noise with variance R.
In a second aspect, the invention provides an emotion recognition system based on EEG signals from a small number of channels, comprising:
the electroencephalogram signal collection module is used for collecting the electroencephalogram signals generated at the FP1, FP2, T7 and T8 channels of the scalp (4 channels) during emotion detection of a target user;
the signal preprocessing module is used for processing the 4-channel electroencephalogram signals acquired by the electroencephalogram signal collecting module and converting the 4-channel electroencephalogram signals into basic signals;
the signal decomposition and reconstruction module is used for decomposing and reconstructing the basic signal output by the signal preprocessing module to obtain a plurality of component signals;
the characteristic extraction module is used for extracting the characteristics of the basic signal and the component signal and smoothing the characteristics;
and the classification module is used for inputting the features after the smoothing treatment into the trained classification model and detecting the emotion of the user.
The invention has the beneficial effects that:
the invention provides a sentiment recognition method and system based on a small number of channel EEG, which detects the sentiment of a user by collecting EEG signals of a small number of channels, the user only needs to do sentiment mobilization according to the actual scene, a PC analyzes the collected EEG signals of the cerebral cortex of the user, and the final characteristics are input into a classifier for classification through preprocessing and characteristic extraction, so that the sentiment category of the user at the moment can be reflected.
Drawings
Fig. 1 is a flow chart of a method of emotion recognition based on a few-channel EEG of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In order to improve emotion recognition precision and improve practicability of an algorithm on wearable equipment, the invention provides an emotion recognition method and system based on a small number of channel EEGs.
In one embodiment, a method for emotion recognition based on a small number of channel EEG, as shown in fig. 1, includes:
s1, acquiring electroencephalogram signals of 4 channels of a user scalp, and preprocessing the electroencephalogram signals;
s2, converting the preprocessed electroencephalogram signals into basic signals, and decomposing and reconstructing the basic signals to obtain a series of component signals;
s3, extracting differential entropy of the basic signal and the component signal as features, and smoothing the features;
and S4, inputting the smoothed features into a trained classification model to detect the emotion of the user.
Preferably, in order to accurately identify the user's emotion category while keeping a portable device easy to design, this embodiment selects the FP1 and FP2 channels located over the prefrontal lobe and the T7 and T8 channels located over the temporal lobe; the positions of these 4 channels make it convenient to design wearable products such as head rings, earphones and glasses. For the EEG signals acquired from these 4 channels, in order to preserve as much of the spatial asymmetry of the EEG across brain regions as possible, the potential differences between the channel electrodes are used to construct 6 original basic signals: FP1-T7, FP1-T8, FP1-FP2, FP2-T7, FP2-T8 and T7-T8. Each original basic signal is sliced into 5-second segments, and the slices of the 6 original basic signals within the same time period are used as one group of basic signals for subsequent processing and feature extraction.
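The construction of the 6 basic signals and the 5-second slicing can be sketched as follows (a minimal NumPy example on placeholder data; the array names and the 60-second recording length are illustrative assumptions, while the 200 Hz rate follows from the preprocessing step described below):

```python
import numpy as np

# Placeholder 4-channel recording: rows are FP1, FP2, T7, T8, sampled at 200 Hz.
fs = 200
eeg = np.random.randn(4, 60 * fs)           # 60 s of dummy data for illustration
fp1, fp2, t7, t8 = eeg

# Six basic signals built from pairwise electrode potential differences.
basic = np.stack([
    fp1 - t7,   # FP1-T7
    fp1 - t8,   # FP1-T8
    fp1 - fp2,  # FP1-FP2
    fp2 - t7,   # FP2-T7
    fp2 - t8,   # FP2-T8
    t7 - t8,    # T7-T8
])

# Slice each basic signal into non-overlapping 5-second segments; the six slices
# from the same window form one group for subsequent feature extraction.
win = 5 * fs
n_win = basic.shape[1] // win
groups = basic[:, :n_win * win].reshape(6, n_win, win).transpose(1, 0, 2)
# groups has shape (n_windows, 6 basic signals, 1000 samples)
```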
Preferably, in this embodiment, the process of preprocessing the electroencephalogram signal before converting it into the basic signal is as follows:
performing down-sampling on the electroencephalogram signal to obtain an electroencephalogram signal of 200 Hz;
filtering the 200 Hz electroencephalogram signal with a Butterworth filter and keeping only the 0-49 Hz frequency range of the filtered signal.
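A minimal preprocessing sketch using SciPy, assuming an original sampling rate of 1000 Hz; the filter order and the zero-phase (forward-backward) filtering are illustrative choices of mine, since the text only specifies a Butterworth filter and the 0-49 Hz band:

```python
from math import gcd

import numpy as np
from scipy.signal import butter, resample_poly, sosfiltfilt

def preprocess(raw, fs_in, fs_out=200, cutoff=49.0, order=4):
    """Down-sample one EEG channel to 200 Hz and low-pass it below 49 Hz."""
    # Rational-factor resampling from fs_in to fs_out.
    g = gcd(int(fs_in), int(fs_out))
    x = resample_poly(raw, int(fs_out) // g, int(fs_in) // g)
    # Zero-phase Butterworth low-pass keeping the 0-49 Hz band.
    sos = butter(order, cutoff, btype='low', fs=fs_out, output='sos')
    return sosfiltfilt(sos, x)

# Example: a channel recorded at 1000 Hz (placeholder data).
filtered = preprocess(np.random.randn(1000 * 10), fs_in=1000)
```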
In one embodiment, the fundamental signal is decomposed and reconstructed using intrinsic time scale decomposition, discrete wavelet transform, variational modal decomposition, and phase space reconstruction.
Specifically, intrinsic time scale decomposition (ITD) can adaptively process nonlinear, non-stationary signals and strictly confines the end effect to the endpoints, so that contamination does not propagate inward through the whole segment; this overcomes the problem of the empirical mode decomposition (EMD) method, in which the end effect propagates inward and contaminates the entire segment. The ITD method processes a signal by affine linear transformation and decomposes it into several non-interfering proper rotation components (PRCs) and a monotonic trend component in the frequency domain, preserving the transient characteristics of the signal well after decomposition. Since emotion recognition is mainly related to the high-frequency activity of the human brain, the first two proper rotation components, PRC1 and PRC2, are selected as the main components in this embodiment.
Specifically, the discrete wavelet transform (DWT) is a discretization of the scale and translation of a basic wavelet: by scaling and shifting the mother wavelet, the signal is decomposed into a series of functions called wavelets, forming a set of wavelet coefficients. In this embodiment a 5-level DWT with the third-order Daubechies (db3) wavelet is used to extract five physiological EEG bands, namely Gamma (31-48 Hz), Beta (14-30 Hz), Alpha (8-13 Hz), Theta (4-7 Hz) and Delta (0.5-3 Hz); the Delta band is then discarded, because its appearance is usually related to human sleep and it contributes very little to the emotion recognition task.
Specifically, the variational modal decomposition (VMD) has a rigorous mathematical model and a solid theoretical basis, overcomes drawbacks such as the frequency aliasing seen in EMD and local mean decomposition (LMD), and filters noise more effectively. Instead of a recursive procedure, VMD extracts the intrinsic mode functions from the signal with a parallel method. The number of decomposition components K must be predefined: if K is too small, useful modes may be missed, while if K is too large the signal is over-segmented and the computational load increases. Fortunately, a useful property of VMD is that when the signal is over-segmented, the redundant modes are essentially noise components. Considering both performance and noise-reduction requirements, K is set to 4 in this embodiment, that is, the first 4 intrinsic mode components VMD1, VMD2, VMD3 and VMD4 are selected as the main modes.
Specifically, since the EEG signal has strong temporal continuity, it is sometimes necessary to search for patterns in the time series and in high-dimensional transformations of it. The phase space reconstruction method forms a phase space from the values of one variable at different moments; because the evolution of any variable of a dynamical system is naturally coupled with the other variables of the system, the change of that variable over time implicitly carries the dynamics of the whole system, and the trajectory in the reconstructed phase space therefore reflects the evolution of the system state. Phase space reconstruction has two key parameters, the embedding dimension D and the delay time τ; in this embodiment D is set to 3 and τ to 1, and the 3D-PSR-ED component signal is obtained.
In one embodiment, the differential entropy of the basic signal and of the component signals is extracted as the feature set and then smoothed. That is, differential entropy is extracted from the basic signal, the phase-space-reconstructed signal (3D-PSR-ED), the proper rotation components from ITD (PRC1, PRC2), the rhythms from DWT (Gamma, Beta, Alpha, Theta) and the components from VMD (VMD1, VMD2, VMD3, VMD4). After this processing each basic signal yields a 1x12 feature vector, and the 6 basic signals together yield 6x12 features. A linear dynamic system (LDS) method is then applied to the extracted features over the time series to filter out components unrelated to emotional state, completing the feature smoothing.
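A sketch of the band extraction using the PyWavelets package; the mapping of decomposition levels to the Gamma/Beta/Alpha/Theta bands assumes a 200 Hz signal and is approximate, since the dyadic DWT sub-bands do not coincide exactly with the frequency ranges quoted above:

```python
import numpy as np
import pywt

def dwt_bands(signal):
    """5-level db3 DWT of a 200 Hz basic signal; reconstruct one time-domain
    component per kept detail level (the approximation, roughly Delta, is dropped)."""
    coeffs = pywt.wavedec(signal, 'db3', level=5)   # [cA5, cD5, cD4, cD3, cD2, cD1]
    # Approximate correspondence at 200 Hz: cD2 ~ 25-50 Hz (Gamma), cD3 ~ 12.5-25 Hz
    # (Beta), cD4 ~ 6.25-12.5 Hz (Alpha), cD5 ~ 3.1-6.25 Hz (Theta).
    level_for = {'Gamma': 4, 'Beta': 3, 'Alpha': 2, 'Theta': 1}
    bands = {}
    for name, idx in level_for.items():
        kept = [np.zeros_like(c) for c in coeffs]
        kept[idx] = coeffs[idx]                      # keep one level, zero the rest
        bands[name] = pywt.waverec(kept, 'db3')[:len(signal)]
    return bands
```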
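A sketch of the VMD step, assuming the third-party vmdpy package is available; the parameters alpha, tau, DC, init and tol are common defaults of mine and are not values given in the text:

```python
import numpy as np
from vmdpy import VMD   # third-party package, assumed to be installed

# Placeholder 5-second basic-signal slice at 200 Hz.
signal = np.random.randn(5 * 200)

# K = 4 as in the embodiment; the remaining parameters are illustrative defaults.
alpha, tau, K, DC, init, tol = 2000, 0.0, 4, 0, 1, 1e-7
u, u_hat, omega = VMD(signal, alpha, tau, K, DC, init, tol)
vmd1, vmd2, vmd3, vmd4 = u      # the four intrinsic mode components VMD1..VMD4
```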
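A sketch of the phase space reconstruction with D = 3 and τ = 1; the reduction of the embedded trajectory to a single "3D-PSR-ED" signal via Euclidean distances is one plausible reading of the abbreviation, not something the text spells out:

```python
import numpy as np

def phase_space_embed(x, dim=3, tau=1):
    """Delay embedding: row i is the point [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[k * tau : k * tau + n] for k in range(dim)], axis=1)

def psr_ed(x, dim=3, tau=1):
    """Collapse each 3-D trajectory point to its Euclidean distance from the
    trajectory centroid, giving a 1-D component signal (an assumed reading of ED)."""
    traj = phase_space_embed(x, dim, tau)
    return np.linalg.norm(traj - traj.mean(axis=0), axis=1)
```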
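Under a Gaussian assumption, the differential entropy of a signal segment reduces to a function of its variance, so the feature extraction can be sketched as:

```python
import numpy as np

def differential_entropy(x):
    """Differential entropy under a Gaussian assumption: 0.5 * ln(2*pi*e*var(x))."""
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(x))

# One 5-second basic-signal slice yields a 1x12 feature vector: the DE of the basic
# signal, 3D-PSR-ED, PRC1, PRC2, the four DWT bands and the four VMD components;
# the six basic signals of one window therefore give a 6x12 feature map.
```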
Specifically, the linear dynamic system is represented as:
x_t = z_t + w_t
z_t = A z_{t-1} + v_t
where x_t is the observed variable, z_t is the hidden emotion variable, A is the transition matrix, w_t is Gaussian observation noise with variance Q, and v_t is Gaussian process noise with variance R.
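Feature smoothing with the linear dynamic system can be approximated by a per-dimension Kalman filter pass; the constants A, Q and R below are illustrative, since the text does not state how they are estimated (an EM fit would be a common choice):

```python
import numpy as np

def lds_smooth(x_seq, A=1.0, Q=1e-2, R=1e-3):
    """One-dimensional Kalman filter for x_t = z_t + w_t (observation noise
    variance Q) and z_t = A*z_{t-1} + v_t (process noise variance R), following
    the variable naming used in the text above."""
    z, P = float(x_seq[0]), 1.0          # initial state estimate and variance
    smoothed = np.empty(len(x_seq))
    for t, x in enumerate(x_seq):
        z_pred = A * z                    # predict the hidden emotion variable
        P_pred = A * P * A + R            # predicted variance (process noise R)
        K = P_pred / (P_pred + Q)         # Kalman gain (observation noise Q)
        z = z_pred + K * (x - z_pred)     # update with the observed feature x_t
        P = (1.0 - K) * P_pred
        smoothed[t] = z
    return smoothed

# Applied independently to each of the 6x12 DE feature dimensions over time.
```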
On this basis, the invention provides an emotion recognition system based on EEG signals from a small number of channels, which comprises:
the electroencephalogram signal collection module is used for collecting the electroencephalogram signals generated at the FP1, FP2, T7 and T8 channels of the scalp (4 channels) during emotion detection of a target user;
the signal preprocessing module is used for processing the 4-channel electroencephalogram signals acquired by the electroencephalogram signal collecting module and converting the 4-channel electroencephalogram signals into basic signals;
the signal decomposition and reconstruction module is used for decomposing and reconstructing the basic signal output by the signal preprocessing module to obtain a plurality of component signals;
the characteristic extraction module is used for extracting the characteristics of the basic signal and the component signal and smoothing the characteristics;
and the classification module is used for inputting the features after the smoothing treatment into the trained classification model and detecting the emotion of the user.
Specifically, the classification module uses a simple CNN structure similar to LeNet-5. Compared with LeNet-5, the CNN model used in this embodiment reduces the number of layers and the number of convolution kernels in order to reduce the model parameters, which makes it better suited to a small-sample classification task based on hand-crafted features. The specific structure of the model is shown in Table 1: the CNN model contains three convolutional layers, two max-pooling layers and one linear layer, and the kernels of the convolutional and pooling layers are all of size 2. The input data pass through two convolutional layers in sequence and are then max-pooled; the pooled data are convolved and pooled again; the result of the second pooling is fed to the linear layer, and emotion recognition is performed through Softmax.
Table 1: Architecture of the CNN model of the invention
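Since Table 1 is only available as an image, the following PyTorch sketch reconstructs the topology described in the text (three 2x2 convolutions, two 2x2 max-poolings, one linear layer, softmax output); the channel widths, the 'same' padding and the 1x6x12 input layout are my assumptions:

```python
import torch
import torch.nn as nn

class FewChannelEmotionCNN(nn.Module):
    """Sketch of the LeNet-5-like classifier described above (assumed widths)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=2, padding='same'), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=2, padding='same'), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                      # 6x12 -> 3x6
            nn.Conv2d(8, 16, kernel_size=2, padding='same'), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),                      # 3x6 -> 1x3
        )
        self.classifier = nn.Linear(16 * 1 * 3, n_classes)

    def forward(self, x):            # x: (batch, 1, 6, 12) DE feature maps
        x = self.features(x)
        x = torch.flatten(x, 1)
        return torch.softmax(self.classifier(x), dim=1)

# Quick shape check on a dummy batch.
print(FewChannelEmotionCNN()(torch.randn(2, 1, 6, 12)).shape)   # torch.Size([2, 3])
```

For training, one would typically feed the pre-softmax logits to a cross-entropy loss; the softmax here only mirrors the description of the inference step.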
In one embodiment, experiments were conducted on the public emotion EEG data set SEED, which uses movie clips as emotion-inducing material covering three classes: positive, neutral and negative emotions; in each experimental session participants watched 15 videos, 5 for each emotion type. The SEED data set contains 62-channel (electrode) EEG emotion signals recorded from 15 participants (7 males and 8 females). This embodiment adopts an experimental paradigm similar to that of the SEED data set, and data from 10 participants were collected in the laboratory. Since the method of the invention uses only 4 channels, only 14 channels were recorded during data collection: FP1, FP2, F7, F8, F3, F4, FC5, FC6, T7, T8, P7, P8, O1, O2. Experiments were carried out in two modes, Type 1 and Type 2, and the results are recorded as Accuracy 1 and Accuracy 2.
Type 1: the EEG signals generated by the cerebral cortex while each participant watched the 15 videos are collected and divided into trials of 5 seconds each; all trials are extracted and randomly shuffled, 20% of the samples of each emotion class are taken as the test set each time, and the remaining data form the training set, completing a 5-fold cross-validation experiment.
Type 2: each time, the EEG signals acquired while a participant watched 3 videos (one positive, one negative and one neutral) are selected, and all trials extracted from them form the test set, while the EEG acquired while watching the other videos forms the training set, completing a 5-fold cross-validation experiment.
In the Type 1 experiments, the method achieved average classification accuracies of 98.67% and 99.97% on the SEED data set and on the data set of the invention, respectively. The Type 2 experiments ensure that the training set and the test set come from completely different time periods, which is much closer to a real usage scenario; here an average accuracy of 90.77% was obtained on the SEED data and 91.38% on the data set of the invention.
The experimental results show that the best performance is obtained in the Type 1 mode. However, since a person's emotional state persists for a certain time, it does not change significantly within a very short interval. Shuffling the whole data set in the Type 1 experiment means that trials originally extracted from one continuous period are distributed across both the training set and the test set, causing data leakage. This is a major reason why, on both the SEED data set and the data set of the invention, the Type 1 results are significantly better than the Type 2 results. In addition, differences between participants are an important source of variation in the experimental results; they may be caused by subjective factors such as personal awareness, upbringing and physiological state, as well as by the objective experimental environment.
To verify the effectiveness of the method, experiments were carried out on the 62-channel SEED data set and on the 14-channel data set following the classical EEG emotion recognition pipeline: power spectral density (PSD), differential entropy (DE) and normalized first-order difference (1stOD) features were extracted from each channel in five specific frequency bands (Gamma, Beta, Alpha, Theta and Delta) and fed into an SVM, and Type 1 and Type 2 experiments were performed. The method of the invention was then compared with the classical approach using the same features and classifiers (the classical approach usually performs signal decomposition with the discrete wavelet transform only): PSD and DE features were obtained, LDS feature smoothing was applied to the DE features, and the results were fed to the SVM and to the CNN model described herein. The results are shown in Tables 2 and 3:
table 2 comparison of performance using 4 channels on SEED data set with conventional 62 channels
Figure BDA0003536436090000091
Table 3 comparison of performance of the inventive data set using 4 channels with the conventional 14 channels
Figure BDA0003536436090000092
As the results in Tables 2 and 3 show, although only 4 channels of EEG data are collected in the present method, the invention achieves better average accuracy than the 62-channel SEED data set under the same experimental modes, features and classifiers. On the data set of the invention, the method of the invention is superior to the traditional method when the feature is 1stOD, and when the feature is PSD or DE the performance difference with respect to the multi-channel traditional method is very small.
In addition, under the DE feature, comparing the results of the SVM and of the CNN after LDS feature smoothing shows that a substantial performance improvement can be obtained by improving the feature smoothing and the classification method.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A method for emotion recognition based on a small number of channel EEG signals, comprising:
s1, acquiring electroencephalogram signals of 4 channels of a user scalp, and preprocessing the electroencephalogram signals;
s2, converting the preprocessed electroencephalogram signals into basic signals, and decomposing and reconstructing the basic signals to obtain a series of component signals;
s3, extracting differential entropy of the basic signal and the component signal as features, and smoothing the features;
and S4, inputting the smoothed features into a trained classification model to detect the emotion of the user.
2. The method of claim 1, wherein the 4 channels of the user's scalp are the FP1 and FP2 channels located in the prefrontal lobe and the T7 and T8 channels located in the temporal lobe.
3. The method as claimed in claim 2, wherein the electroencephalogram signals of the FP1, FP2, T7 and T8 channels are preprocessed, and the potential difference between the channel electrodes is used to construct a basic signal, i.e. the four preprocessed electroencephalogram signals are subtracted from each other to obtain 6 basic signals, which are FP1-T7, FP1-T8, FP1-FP2, FP2-T7, FP2-T8 and T7-T8, respectively.
4. The method for emotion recognition based on EEG signals of a small number of channels as claimed in claim 1 or 3, wherein the preprocessing of EEG signals is performed by:
performing down-sampling on the electroencephalogram signal to obtain an electroencephalogram signal of 200 Hz;
and filtering the 200Hz electroencephalogram signal by adopting a Butterworth filter to obtain the filtered electroencephalogram signal.
5. The method of claim 1, wherein the fundamental signal is decomposed and reconstructed using intrinsic time scale decomposition, discrete wavelet transform, variational modal decomposition, and phase space reconstruction, respectively, wherein:
the intrinsic time scale decomposition processes the basic signal in an affine linear transformation mode to obtain a PRC1 rotation component signal and a PRC2 rotation component signal;
extracting component signals of four physiological electroencephalogram wave bands including Gamma, Beta, Alpha and Theta from the basic signals by adopting discrete wavelet transformation;
the variational modal decomposition adopts a parallel method to extract an inherent modal function from the basic signal to obtain 4 component signals of VMD1, VMD2, VMD3 and VMD 4;
and reconstructing the basic signal by phase space reconstruction to obtain a 3D-PSR-ED component signal.
6. The method of claim 1, wherein a linear dynamic system is used to filter out components of the features that are not associated with the emotional state, thereby performing feature smoothing, the linear dynamic system being represented as:
x_t = z_t + w_t
z_t = A z_{t-1} + v_t
wherein x_t represents an observed variable, z_t represents a hidden emotion variable, A represents a transition matrix, w_t represents Gaussian observation noise with variance Q, and v_t represents Gaussian process noise with variance R.
7. An emotion recognition system based on a small number of channel EEG signals, comprising:
the electroencephalogram signal collection module is used for collecting the electroencephalogram signals generated at the FP1, FP2, T7 and T8 channels of the scalp (4 channels) during emotion detection of a target user;
the signal preprocessing module is used for processing the 4-channel electroencephalogram signals acquired by the electroencephalogram signal collecting module and converting the 4-channel electroencephalogram signals into basic signals;
the signal decomposition and reconstruction module is used for decomposing and reconstructing the basic signal output by the signal preprocessing module to obtain a plurality of component signals;
the characteristic extraction module is used for extracting the characteristics of the basic signal and the component signal and smoothing the characteristics;
and the classification module is used for inputting the features after the smoothing treatment into the trained classification model and detecting the emotion of the user.
CN202210227246.7A 2022-03-08 2022-03-08 Emotion recognition method and system based on small number of channel EEG signals Pending CN114699078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210227246.7A CN114699078A (en) 2022-03-08 2022-03-08 Emotion recognition method and system based on small number of channel EEG signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210227246.7A CN114699078A (en) 2022-03-08 2022-03-08 Emotion recognition method and system based on small number of channel EEG signals

Publications (1)

Publication Number Publication Date
CN114699078A true CN114699078A (en) 2022-07-05

Family

ID=82167918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210227246.7A Pending CN114699078A (en) 2022-03-08 2022-03-08 Emotion recognition method and system based on small number of channel EEG signals

Country Status (1)

Country Link
CN (1) CN114699078A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116127366A (en) * 2023-04-17 2023-05-16 深圳市齐奥通信技术有限公司 Emotion recognition method, system and medium based on TWS earphone
WO2024174713A1 (en) * 2023-02-22 2024-08-29 深圳大学 Electroencephalogram-based emotion recognition circuit and system, and chip

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007190408A (en) * 2005-11-01 2007-08-02 Action Research:Kk Vibration presentation apparatus
WO2019025000A1 (en) * 2017-08-03 2019-02-07 Toyota Motor Europe Method and system for determining a driving intention of a user in a vehicle using eeg signals
CN110638445A (en) * 2019-09-16 2020-01-03 昆明理工大学 SSVEP-based few-channel electroencephalogram signal acquisition device
CN110897648A (en) * 2019-12-16 2020-03-24 南京医科大学 Emotion recognition classification method based on electroencephalogram signal and LSTM neural network model
CN112465069A (en) * 2020-12-15 2021-03-09 杭州电子科技大学 Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN
US20210141453A1 (en) * 2017-02-23 2021-05-13 Charles Robert Miller, III Wearable user mental and contextual sensing device and system
CN113116307A (en) * 2021-04-26 2021-07-16 西安领跑网络传媒科技股份有限公司 Sleep staging method, computer-readable storage medium and program product
CN113208633A (en) * 2021-04-07 2021-08-06 北京脑陆科技有限公司 Emotion recognition method and system based on EEG brain waves
CN114052735A (en) * 2021-11-26 2022-02-18 山东大学 Electroencephalogram emotion recognition method and system based on depth field self-adaption

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007190408A (en) * 2005-11-01 2007-08-02 Action Research:Kk Vibration presentation apparatus
US20210141453A1 (en) * 2017-02-23 2021-05-13 Charles Robert Miller, III Wearable user mental and contextual sensing device and system
WO2019025000A1 (en) * 2017-08-03 2019-02-07 Toyota Motor Europe Method and system for determining a driving intention of a user in a vehicle using eeg signals
CN110638445A (en) * 2019-09-16 2020-01-03 昆明理工大学 SSVEP-based few-channel electroencephalogram signal acquisition device
CN110897648A (en) * 2019-12-16 2020-03-24 南京医科大学 Emotion recognition classification method based on electroencephalogram signal and LSTM neural network model
CN112465069A (en) * 2020-12-15 2021-03-09 杭州电子科技大学 Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN
CN113208633A (en) * 2021-04-07 2021-08-06 北京脑陆科技有限公司 Emotion recognition method and system based on EEG brain waves
CN113116307A (en) * 2021-04-26 2021-07-16 西安领跑网络传媒科技股份有限公司 Sleep staging method, computer-readable storage medium and program product
CN114052735A (en) * 2021-11-26 2022-02-18 山东大学 Electroencephalogram emotion recognition method and system based on depth field self-adaption

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
WANG, Z. M. et al.: "Channel Selection Method for EEG Emotion Recognition Using Normalized Mutual Information", IEEE, 31 December 2019 (2019-12-31) *
ZHANG, X. W. et al.: "Individual Similarity Guided Transfer Modeling for EEG-based Emotion Recognition", 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 31 December 2019 (2019-12-31) *
Anonymous: "EEG Emotion Recognition Based on 3DC-BGRU", Computer Engineering and Applications, 15 October 2020 (2020-10-15) *
刘群; 张振: "Emotion recognition based on touch-screen behavior on intelligent mobile terminals", Microelectronics & Computer (微电子学与计算机), 15 June 2016 (2016-06-15) *
朱嘉祎: "Research on stable patterns of EEG signals for emotion recognition", Basic Sciences; Medicine & Health Technology; Information Technology (基础科学;医药卫生科技;信息科技), 15 March 2020 (2020-03-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024174713A1 (en) * 2023-02-22 2024-08-29 深圳大学 Electroencephalogram-based emotion recognition circuit and system, and chip
CN116127366A (en) * 2023-04-17 2023-05-16 深圳市齐奥通信技术有限公司 Emotion recognition method, system and medium based on TWS earphone

Similar Documents

Publication Publication Date Title
Hamad et al. Feature extraction of epilepsy EEG using discrete wavelet transform
Shaker EEG waves classifier using wavelet transform and Fourier transform
CN107961007A (en) A kind of electroencephalogramrecognition recognition method of combination convolutional neural networks and long memory network in short-term
CN110969108B (en) Limb action recognition method based on autonomic motor imagery electroencephalogram
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
Huang et al. A review of electroencephalogram signal processing methods for brain-controlled robots
CN111184509A (en) Emotion-induced electroencephalogram signal classification method based on transfer entropy
CN113558644B (en) Emotion classification method, medium and equipment for 3D matrix and multidimensional convolution network
CN114699078A (en) Emotion recognition method and system based on small number of channel EEG signals
Shi et al. Feature recognition of motor imaging EEG signals based on deep learning
Geng et al. [Retracted] A Fusion Algorithm for EEG Signal Processing Based on Motor Imagery Brain‐Computer Interface
Hassan et al. Review of EEG Signals Classification Using Machine Learning and Deep-Learning Techniques
Saha et al. Automatic emotion recognition from multi-band EEG data based on a deep learning scheme with effective channel attention
Rajashekhar et al. Electroencephalogram (EEG) signal classification for brain–computer interface using discrete wavelet transform (DWT)
CN112244880B (en) Emotion-induced electroencephalogram signal analysis method based on variable-scale symbol compensation transfer entropy
Cui et al. A Dual-Branch Interactive Fusion Network to Remove Artifacts From Single-Channel EEG
Deng et al. The classification of motor imagery eeg signals based on the time-frequency-spatial feature
Wang A modified motor imagery classification method based on eegnet
Murad et al. Unveiling Thoughts: A Review of Advancements in EEG Brain Signal Decoding into Text
Alpturk et al. Analysis of relation between motor activity and imaginary EEG records
Gupta et al. A three phase approach for mental task classification using EEG
Zhao et al. GTSception: a deep learning eeg emotion recognition model based on fusion of global, time domain and frequency domain feature extraction
Kulkarni et al. Band decomposition of asynchronous electroencephalogram signal for upper limb movement classification
Lu et al. Dual-Stream Attention-TCN for EMG Removal from a Single-Channel EEG
Shoieb et al. Neurological disorder detection of brain abnormal activities using a new enhanced computer-aided model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination