
CN112754502A - Automatic music switching method based on electroencephalogram signals - Google Patents

Automatic music switching method based on electroencephalogram signals

Info

Publication number
CN112754502A
Authority
CN
China
Prior art keywords
emotion
music
song
electroencephalogram
songs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110040606.8A
Other languages
Chinese (zh)
Inventor
滕凯迪
赵倩
董宜先
单洪芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qufu Normal University
Original Assignee
Qufu Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qufu Normal University filed Critical Qufu Normal University
Priority to CN202110040606.8A priority Critical patent/CN112754502A/en
Publication of CN112754502A publication Critical patent/CN112754502A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203: Signal processing for noise prevention, reduction or removal
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/725: Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B 5/7253: Details of waveform analysis characterised by using transforms
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device
    • A61M: DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00: Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M 21/02: Devices for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • A61M 2021/0005: Change of the state of consciousness by the use of a particular sense, or stimulus
    • A61M 2021/0027: Change of the state of consciousness by the use of the hearing sense

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychology (AREA)
  • Anesthesiology (AREA)
  • Acoustics & Sound (AREA)
  • Hematology (AREA)
  • Evolutionary Computation (AREA)
  • Pain & Pain Management (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an automatic music switching method based on electroencephalogram (EEG) signals. While a music application plays songs, the user's current emotion is recognized from the EEG signals and classified; a song class matching that emotional state is looked up from a previously established matching relation; one song is automatically selected from the class for playback; and the next suitable song is generated automatically from the real-time emotional changes reflected in the EEG data. Music is used to induce the user's emotional state, the state is evaluated, the association between individual emotional experience and musical expression is analyzed, and content-based similarity analysis is performed on the music, establishing a personalized music recommendation and playback system that can provide theoretical support and help for assistive music therapy.

Description

Automatic music switching method based on electroencephalogram signals
Technical Field
The invention relates to the technical field of music therapy, and in particular to an automatic music switching method based on electroencephalogram signals.
Background
Emotion is a psychological and physiological state that carries valuable information about a person, such as personality, preferences, and physical and mental condition. With today's rapid economic development, the pressure people face has grown sharply, and poor emotional states have driven a rapid rise in psychological disorders such as depression and anxiety. To relieve such symptoms, many non-pharmaceutical interventions have attracted wide interest, music intervention among them. Music is a powerful, non-threatening medium through which listeners can experience emotions ranging from joy to sorrow. In the field of music therapy, it is widely accepted that music strongly influences human emotion, that emotion shapes a person's cognition and behavior, and that music can affect a patient's perception, thinking, memory, and decision-making to varying degrees.
Although existing music applications can recommend songs, most recommendations are based on keyword search or on the behavior of similar users, so users often do not get music that truly fits their current mood. Personalized music recommendation driven by individual emotional change is therefore a direction worth studying and improving, and finding the correspondence between individual emotion and music is the key problem a personalized music recommendation function must solve.
Disclosure of Invention
To address these problems, the invention provides an automatic music switching method based on electroencephalogram signals, which recognizes the user's emotion from acquired EEG signals and thereby performs personalized music recommendation automatically as the user's mood fluctuates.
While the music application plays songs, the user's current emotion is recognized from the electroencephalogram signals and classified; a song class matching the current emotion is looked up according to the user's emotion classification; one song is selected from that class as the next song to play; and the next suitable song is generated automatically from the real-time emotional changes reflected in the electroencephalogram data.
Preferably, an Emotiv EPOC+ wireless Bluetooth electroencephalograph is used to acquire the electroencephalogram signals.
Preferably, the emotion scale is built with several healthy testers who have received no professional musical training. The testers stay highly focused in a quiet state; their electroencephalogram data are recorded while songs are played; and after each song is played, the testers evaluate its emotion category. The emotion category evaluation consists of subjective 1-9 integer scores on two dimensions, positive-negative valence and high-low arousal. The test is repeated with several songs of different types, so that the 1-9 integer labels describe each tester's personalized musical emotion in detail.
Preferably, emotion classification based on the electroencephalogram signals is realized by constructing an electroencephalogram-emotion classification model, whose construction comprises the following steps:
Step A1, collecting the electroencephalogram signals and preprocessing them;
Step A2, extracting feature information from the electroencephalogram signals to form a feature matrix;
Step A3, performing feature fusion on the feature matrix to achieve dimensionality reduction;
Step A4, using an SVM classifier as the classification model, training it on a training set, and obtaining the trained classification model through cross validation;
Step A5, matching the trained classification model with the emotion scale to obtain the electroencephalogram-emotion classification model.
Preferably, the preprocessing of the electroencephalogram signals in step A1 includes amplification, filtering, wavelet denoising, and decomposition and reconstruction; the wavelet transform parameters involved in the wavelet denoising are determined by a genetic algorithm; the electroencephalogram signals are decomposed and reconstructed with wavelet packets, and the signals of the individual frequency bands, namely the delta, theta, alpha, and beta rhythms, are extracted. The feature extraction in step A2 includes extracting the power-spectrum features of the four rhythms with the Welch spectral estimation algorithm and extracting their Hurst indexes. Step A3 performs feature fusion on the feature matrix by principal component analysis.
Preferably, the song classification is based on the association between individual emotional trigger points and songs, and comprises the following steps:
Step B1, down-sampling the song, framing it, and applying the short-time Fourier transform;
Step B2, extracting the RMS and MFCC features of the song;
Step B3, reducing the dimensionality of the feature parameters through cluster analysis, extracting the cluster centres and within-class variances with a hybrid iterative K-means clustering method;
Step B4, matching the emotion scale subjectively marked by the user while listening with the songs on the basis of statistical analysis, and mapping the electroencephalogram signals and corresponding songs into the quadrants of the classical valence-arousal two-dimensional emotion model, so that the songs are likewise labeled into the emotion quadrants, yielding the individual music-emotion model.
Preferably, the music library is searched for songs similar to a seed song: the parameters of a Gaussian mixture model are trained by maximum likelihood estimation, content-based similarity analysis is performed on the songs, the user specifies a similarity level and selects songs in the music library, and the music application automatically outputs the result according to the user's selection, realizing the song classification.
The beneficial effects of the invention are as follows:
1. The user's emotional state is induced by music and evaluated, and the association between individual emotional experience and musical expression is analyzed, realizing personalized automatic music switching and providing theoretical support and help for assistive music therapy.
2. The Emotiv EPOC+ wireless Bluetooth electroencephalograph records the electroencephalogram and is portable to wear.
3. A genetic algorithm searches for the optimal wavelet transform parameters, yielding the best EEG denoising efficiency and a clean EEG signal.
4. Considering that the EEG is a typical non-stationary signal, the method extracts its power-spectrum features and Hurst indexes, fuses the extracted features by principal component analysis, and then obtains a trained classification model through cross validation of an SVM classifier, making EEG classification and emotion recognition more comprehensive and accurate.
Drawings
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a 2D emotional model diagram;
FIG. 3 is a block diagram of a pre-processing flow of an electroencephalogram signal;
FIG. 4 is a flow chart of automatic music switching through electroencephalogram signals.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. The embodiments are presented for purposes of illustration and description and are not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described to best explain the principles of the invention and its practical application, and to enable others of ordinary skill in the art to understand the invention in its various embodiments and with the various modifications suited to the particular use contemplated.
Example 1
As a physiological electrical signal of the central nervous system, the electroencephalogram (EEG) is closely related to emotion; it reflects the physiological and psychological state of the human body objectively and comprehensively, which makes it important for studying the classification of emotional states. The core idea of the invention, shown in FIG. 1, is to use the EEG as a window onto the functional state and activity patterns of the brain: the user's emotional state is analyzed from changes in the EEG signals, giving an effective evaluation of that state; by analyzing the association between individual emotional experience and musical expression, the few key musical features that most influence the user's emotion are found; and music regulation moves the user toward the desired target emotional state, so that individualized music intervention is effective for users in different emotional states.
I. Training the electroencephalogram-emotion classification model
1. Collecting the electroencephalogram signals
Sixteen seed songs of different types, selected by their emotion tags from a music data website, were played to 10 healthy testers who had received no professional musical training; the testers were required to stay highly focused in a quiet state. Electroencephalogram data were recorded while the songs played; after each song was played, the tester evaluated its emotion category, giving subjective 1-9 scores on the two dimensions of positive-negative valence and high-low arousal. Referring to FIG. 2, the horizontal dimension is valence, ranging from pleasant to unpleasant; the vertical dimension is arousal, ranging from calm to excited. Scores 1-5 are defined as L and 6-9 as H, so that both valence and arousal classification become binary problems. These 1-9 integer labels describe the tester's personalized musical emotion in detail. For example, the song "Say Hey" maps to the first quadrant of the emotion coordinates, corresponding to an excited and pleasant mood; the song "Goodbye My love" maps to the third quadrant, corresponding to a sad and depressed mood.
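By way of illustration, a minimal Python sketch of the rating-to-quadrant mapping just described; the numeric ratings in the usage example are hypothetical, and only the quadrant assignments of the two songs come from the text.

```python
def quadrant(valence: int, arousal: int) -> int:
    """Map 1-9 valence/arousal ratings to an emotion quadrant.

    Ratings 1-5 count as low (L), 6-9 as high (H), as defined above.
    Quadrant 1: H valence / H arousal (excited, pleasant);
    quadrant 2: L/H; quadrant 3: L/L (sad, depressed); quadrant 4: H/L.
    """
    high_valence = valence >= 6
    high_arousal = arousal >= 6
    if high_valence:
        return 1 if high_arousal else 4
    return 2 if high_arousal else 3

# Hypothetical ratings: a song scored (valence=8, arousal=7) lands in quadrant 1.
print(quadrant(8, 7))  # -> 1
```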
In this embodiment, an Emotiv EPOC+ wireless Bluetooth electroencephalograph acquires the electroencephalogram; it records 14-channel EEG and inertial-sensor signals, the EEG channels being AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 of the international 10-20 system.
2. Electroencephalogram signal preprocessing
Because the electroencephalogram signal is weak, it is easily disturbed by ambient electromagnetic waves and by other signals of the human body; under strong noise in particular, the physiological signal can be completely masked, so the EEG must be preprocessed to guarantee accurate measurement. The preprocessing consists of amplification, filtering, and the wavelet denoising, decomposition, and reconstruction performed in the host-computer control system, as shown in FIG. 3.
(1) Amplification. The electroencephalogram data are amplified through the coupling of a differential amplifier circuit and a DC-blocking amplifier circuit. The amplified signal may still contain high-frequency noise, low-frequency noise, and power-line interference, which must be suppressed by filtering.
(2) Filtering. The filter circuit consists of a high-pass filter, a low-pass filter, and a 50 Hz notch circuit; the cut-off frequencies of the high-pass and low-pass filters are 1 Hz and 30 Hz, respectively. On one hand, the filter circuit blocks the DC level output by the preamplifier stage and prevents saturation of the subsequent circuits; on the other hand, it limits the noise bandwidth and removes the clutter aliased into the signal. The notch circuit further filters out the strong power-line interference picked up during EEG acquisition. A digital sketch of this chain follows.
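A digital counterpart of this analog filter chain can be sketched with SciPy; the fourth-order Butterworth design and the 128 Hz sampling rate (the rate commonly reported for the Emotiv EPOC+) are assumptions, not values stated in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 128  # Hz; assumed Emotiv EPOC+ sampling rate

def prefilter(eeg: np.ndarray, fs: int = FS) -> np.ndarray:
    """Digital counterpart of the analog chain above:
    1 Hz high-pass, 30 Hz low-pass, 50 Hz notch."""
    b_hp, a_hp = butter(4, 1.0, btype="highpass", fs=fs)
    b_lp, a_lp = butter(4, 30.0, btype="lowpass", fs=fs)
    b_n, a_n = iirnotch(50.0, Q=30.0, fs=fs)
    x = filtfilt(b_hp, a_hp, eeg)   # remove DC drift / baseline wander
    x = filtfilt(b_lp, a_lp, x)     # limit the noise bandwidth
    return filtfilt(b_n, a_n, x)    # suppress power-line interference
```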
(3) Wavelet denoising. A relatively clean electroencephalogram signal is obtained through wavelet denoising: a genetic algorithm searches for the wavelet transform parameters that denoise the EEG signal best, i.e. that minimize the mean square error (MSE) between the original EEG signal and the denoised one. The MSE is therefore chosen as the fitness function:

MSE = (1/N) Σ (x_i − x̂_i)², i = 1, …, N,

where x_i is the original signal, x̂_i is the denoised signal, and N is the number of samples.
the method for searching the optimal wavelet transformation parameters by using the genetic algorithm to perform noise reduction processing on the electroencephalogram signals mainly comprises the following steps:
① Initialize the acquired electroencephalogram signals and compute the SNR and MSE of the input EEG.
② Adjust the wavelet transform parameters with the genetic algorithm. Initialize the meta-heuristic operators, the wavelet basis function φ, the threshold function β, the threshold-selection rule parameter λ, and the number of decomposition levels L; set a suitable range for the wavelet-threshold denoising parameters of the EEG signal, and construct the objective function. Guided by the noise-suppression performance, the genetic algorithm iteratively refines the randomly generated solutions to obtain the optimal denoising parameters and outputs the best solution.
The optimal wavelet transform parameters finally determined by the system are: transform type DWT, i.e. the discrete wavelet transform; the db4 wavelet of the Daubechies family; Level = 5, i.e. a 5-level wavelet decomposition of the signal; and soft thresholding.
③ Decompose and reconstruct the electroencephalogram signal with the optimal wavelet transform parameters.
④ Evaluate the denoising effect of the electroencephalogram signal against the two common criteria: the largest signal-to-noise ratio together with the smallest mean square error indicates the best denoising result.
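The denoising step might be sketched as follows with PyWavelets, under stated assumptions: the universal threshold replaces the unspecified threshold rule, and a small exhaustive grid search stands in for the genetic algorithm while keeping the same MSE fitness; the default parameters are the ones the text reports as optimal (DWT, db4, 5 levels, soft threshold).

```python
import itertools
import numpy as np
import pywt

def wavelet_denoise(x: np.ndarray, wavelet: str = "db4",
                    level: int = 5, mode: str = "soft") -> np.ndarray:
    """Wavelet threshold denoising with the reported optimal parameters."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold, estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def search_parameters(x: np.ndarray):
    """Grid-search stand-in for the genetic search: score a small
    parameter grid with the MSE fitness described above."""
    best = None
    for wavelet, level, mode in itertools.product(
            ["db2", "db4", "db8", "sym4"], [3, 4, 5, 6], ["soft", "hard"]):
        mse = np.mean((x - wavelet_denoise(x, wavelet, level, mode)) ** 2)
        if best is None or mse < best[0]:
            best = (mse, wavelet, level, mode)
    return best  # (mse, wavelet, level, mode)
```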
(4) Decomposition and reconstruction. The electroencephalogram signal is decomposed and reconstructed with wavelet packets, and the signals of the individual frequency bands are extracted: the delta rhythm (1-4 Hz), theta rhythm (4-8 Hz), alpha rhythm (8-12 Hz), and beta rhythm (13-30 Hz). The running state and activity pattern of the brain can readily be seen from the time-domain waveforms of these basic rhythms.
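A possible wavelet-packet band extraction with PyWavelets; the 128 Hz sampling rate, db4 wavelet, and 4-level depth are assumptions, and with these values the highest retained node slightly overshoots the 30 Hz beta edge.

```python
import numpy as np
import pywt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (13, 30)}

def extract_rhythms(x: np.ndarray, fs: int = 128, level: int = 4,
                    wavelet: str = "db4") -> dict:
    """Reconstruct the delta/theta/alpha/beta rhythms by keeping only
    the wavelet-packet terminal nodes whose sub-band overlaps each
    rhythm's frequency range, then inverting the transform."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    width = (fs / 2) / len(nodes)  # Hz covered by each terminal node
    rhythms = {}
    for name, (lo, hi) in BANDS.items():
        sel = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric")
        for i, node in enumerate(nodes):
            if i * width < hi and (i + 1) * width > lo:
                sel[node.path] = node.data  # keep overlapping nodes only
        rhythms[name] = sel.reconstruct(update=False)[: len(x)]
    return rhythms
```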
3. Extracting feature information from the electroencephalogram signals
① The power-spectrum features of the four rhythms are extracted with the Welch spectral estimation algorithm. The Welch algorithm is a modified periodogram method: the data segments are allowed to overlap and each segment is windowed, which improves the variance performance and the resolution. First, the power spectrum is estimated via the DFT and the average power spectrum of each band is computed; then all values are log-transformed and averaged; finally, each state of each tester corresponds to a single value for subsequent analysis. A sketch follows.
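A minimal sketch of this band-power feature with SciPy's Welch estimator; the two-second segments and 50% overlap are assumptions, not values given in the text.

```python
import numpy as np
from scipy.signal import welch

def band_log_power(rhythm: np.ndarray, fs: int = 128) -> float:
    """Welch power feature for one reconstructed rhythm: average the
    windowed, overlapping periodograms, then log-average the band
    power so each tester/state pair yields a single value."""
    _, pxx = welch(rhythm, fs=fs, nperseg=2 * fs, noverlap=fs)
    return float(np.log(np.mean(pxx)))
```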
② The Hurst indexes of the four rhythms are extracted. The Hurst index is a parameter that measures the smoothness of a fractal time series and characterizes the non-stationary behaviour of the electroencephalogram signal. It follows from rescaled-range analysis:

R(k)/S(k) = α × k^H, k = 1, 2, …, N,

where R(k) is the range, S(k) is the standard deviation, α is a coefficient, and H is the Hurst index. Taking the logarithm of both sides gives log(R/S)_k = log α + H log k; fitting log(R/S)_k against log k by least squares yields the slope, which is the Hurst index H.
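An illustrative rescaled-range (R/S) estimator of H along the lines just described; the choice of window sizes is an assumption.

```python
import numpy as np

def hurst_rs(x: np.ndarray, min_k: int = 8) -> float:
    """Estimate the Hurst index H by rescaled-range analysis: for a
    range of window sizes k, compute the mean R(k)/S(k) over
    non-overlapping windows, then least-squares fit log(R/S) against
    log(k); the slope of the fitted line is H."""
    ks = np.unique(np.logspace(np.log10(min_k), np.log10(len(x) // 2),
                               num=12).astype(int))
    log_k, log_rs = [], []
    for k in ks:
        rs = []
        for start in range(0, len(x) - k + 1, k):
            w = x[start:start + k]
            dev = np.cumsum(w - w.mean())   # cumulative deviation
            r = dev.max() - dev.min()       # range R(k)
            s = w.std()                     # standard deviation S(k)
            if s > 0:
                rs.append(r / s)
        if rs:
            log_k.append(np.log(k))
            log_rs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(log_k, log_rs, 1)
    return float(slope)
```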
4. Classification of the electroencephalogram signals. The extracted features are fed to the classifier: a support vector machine is selected and trained on the training set, and the emotion recognition model is built with cross validation, quickly completing the recognition and classification of the different emotions. A sketch of this step follows.
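A compact scikit-learn sketch of the fusion and classification steps (A3-A4): PCA on the standardized feature matrix, an SVM, and 5-fold cross validation. The RBF kernel, C = 1.0, and the 95% retained-variance setting are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_emotion_classifier(features: np.ndarray, labels: np.ndarray):
    """Fuse the power-spectrum and Hurst features with PCA, then train
    an SVM and assess it with 5-fold cross validation."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=0.95),  # keep 95% of variance
                          SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, features, labels, cv=5)
    model.fit(features, labels)
    return model, scores.mean()
```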
5. The trained classification model is matched with the emotion model the testers rated subjectively, giving the electroencephalogram-emotion classification model; the resulting model serves as a more objective labeling basis and is applied to the music emotion classification model.
II. Finding the relation between individual emotion trigger points and music, and establishing the individual music-emotion model
1. The sampling rate of each song is reduced to 22050 Hz, and the short-time Fourier transform is applied to the song after framing.
2. The RMS (root mean square) and MFCC (Mel-frequency cepstral coefficient) features of all frames of the song are extracted. The RMS reflects the overall energy of the song: quieter songs have lower RMS, while faster-paced music generally has higher RMS. The MFCCs are speech feature parameters that commonly capture the stylistic character of music.
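Steps 1 and 2 might look as follows with librosa; the 2048-sample frames, 512-sample hop, and 13 MFCC coefficients are assumptions, while the 22050 Hz rate comes from the text.

```python
import librosa
import numpy as np

def song_frame_features(path: str) -> np.ndarray:
    """Per-frame RMS and MFCC features of one song, computed at the
    22050 Hz rate used above. Returns one row per frame."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    rms = librosa.feature.rms(y=y, frame_length=2048, hop_length=512)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=2048, hop_length=512)
    # Row layout: [rms, mfcc_1 .. mfcc_13]
    return np.vstack([rms, mfcc]).T
```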
3. Because a song contains many frames, the feature parameters must be reduced in dimension through cluster analysis; a hybrid iterative K-means clustering method performs this reduction, extracting the cluster centres and within-class variances.
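A simple reading of this step with scikit-learn's K-means (the text does not further specify its "hybrid iterative" variant): the cluster centres and within-cluster variances are concatenated into a fixed-length descriptor per song.

```python
import numpy as np
from sklearn.cluster import KMeans

def song_descriptor(frames: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Collapse a song's many frame features into one fixed-length
    vector: the K-means cluster centres plus the within-cluster
    variances, as described above."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(frames)
    variances = np.array([frames[labels == c].var(axis=0).mean()
                          for c in range(n_clusters)])
    return np.concatenate([km.cluster_centers_.ravel(), variances])
```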
4. Music is a carrier of emotion. The emotion scale the user marked subjectively while listening is matched with the songs on the basis of statistical analysis; the electroencephalogram signals and the corresponding songs are mapped into the quadrants of the classical valence-arousal two-dimensional emotion model, so that the songs are likewise labeled into the emotion quadrants, yielding the individual music-emotion model.
III. Music similarity calculation, real-time EEG recognition of the emotional state, and automatic switching of the corresponding music
The music library is searched for songs similar to a seed song: the GMM (Gaussian mixture model) parameters are trained by maximum likelihood estimation and content-based similarity analysis is performed on the songs. The maximum likelihood estimation is carried out by the iterative EM (expectation-maximization) algorithm, which has two main steps: the E step computes the expected value of the likelihood function of the complete data under the current parameter set, and the M step obtains new parameters by maximizing that expectation; the E and M steps iterate until convergence.
The user may specify a similarity level (e.g., select songs whose similarity exceeds 93%) and select songs in the music library; the music application then outputs the result automatically, realizing the song classification. A sketch of this similarity search follows.
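A sketch of the content-based similarity search with scikit-learn, whose GaussianMixture is fitted by exactly this EM iteration. The eight diagonal-covariance components and the min-max mapping of average log-likelihood onto a 0-1 similarity scale (and hence the meaning of the 93% level) are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def similar_songs(seed_frames: np.ndarray, candidates: dict,
                  threshold: float = 0.93) -> list:
    """Fit a GMM to the seed song's frame features, score each
    candidate song (name -> frame-feature array) by its average
    log-likelihood under that model, min-max map the scores to
    [0, 1], and keep songs above the similarity threshold."""
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          max_iter=200, random_state=0)
    gmm.fit(seed_frames)
    scores = {name: gmm.score(frames) for name, frames in candidates.items()}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:  # degenerate case: every candidate scores identically
        return list(scores)
    return [name for name, s in scores.items()
            if (s - lo) / (hi - lo) >= threshold]
```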
Referring to FIG. 4, the music application includes a music playing module that plays seed songs to induce emotion and an emotion estimation module that assesses the user's emotional state from the electroencephalogram. During playback, the song class matching the real-time emotional state reflected in the user's EEG data is looked up from the previously established matching relation, and the next suitable song is generated automatically from that class; a sketch of this loop follows.
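The playback loop of FIG. 4 might be orchestrated as sketched below; eeg_source, classifier, emotion_to_songs, and player are hypothetical interfaces standing in for the modules named above, and the one-second polling interval is an assumption.

```python
import time

def auto_switch_loop(eeg_source, classifier, emotion_to_songs, player):
    """Skeleton of the playback loop: while a song plays, keep
    classifying the incoming EEG; when the song ends, pick the next
    track from the song class matched to the latest emotional state.
    All four collaborators are hypothetical interfaces."""
    while True:
        window = eeg_source.read_window()      # preprocessed EEG epoch
        emotion = classifier.predict(window)   # emotion quadrant, 1-4
        if player.current_song_finished():
            next_song = emotion_to_songs[emotion].pick()
            player.play(next_song)
        time.sleep(1.0)  # polling interval (assumption)
```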
It is to be understood that the described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by persons of ordinary skill in the art and related arts based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

Claims (9)

1. An automatic music switching method based on electroencephalogram signals, characterized in that, while a music application plays songs, the current user emotion is recognized from the electroencephalogram signals and classified; a song class matching the emotion classification is looked up according to the user's emotion classification; one song is automatically selected from the song class for playback; and the next suitable song is generated automatically from the real-time emotional changes reflected in the electroencephalogram data.
2. The automatic music switching method according to claim 1, wherein an Emotiv EPOC+ wireless Bluetooth electroencephalograph is used to acquire the electroencephalogram signals.
3. The automatic music switching method according to claim 2, wherein several healthy testers who have received no professional musical training keep highly focused in a quiet state; electroencephalogram data of the testers are acquired while songs are played; after a song is played, the testers evaluate the emotion category of the played song; the emotion category evaluation consists of subjective 1-9 scores on the two dimensions of positive-negative valence and high-low arousal; and several songs of different types are selected to repeat the foregoing test.
4. The automatic music switching method according to claim 3, wherein the emotion classification based on the electroencephalogram signal is realized by constructing an electroencephalogram-emotion classification model, and the construction of the electroencephalogram-emotion classification model comprises the following steps:
Step A1, collecting the electroencephalogram signals and preprocessing them;
Step A2, extracting feature information from the electroencephalogram signals to form a feature matrix;
Step A3, performing feature fusion on the feature matrix to achieve dimensionality reduction;
Step A4, using an SVM classifier as the classification model, training it on a training set, and obtaining the trained classification model through cross validation;
Step A5, matching the trained classification model with the emotion scale to obtain the electroencephalogram-emotion classification model.
5. The automatic music switching method according to claim 4, wherein the preprocessing of the electroencephalogram signals in step A1 includes amplification, filtering, wavelet denoising, and decomposition and reconstruction; the wavelet transform parameters involved in the wavelet denoising are determined by a genetic algorithm; and wavelet packets are used to decompose and reconstruct the electroencephalogram signals and extract the signals of the individual frequency bands, namely the delta, theta, alpha, and beta rhythms.
6. The automatic music switching method according to claim 4, wherein the extraction of feature information from the electroencephalogram signals in step A2 includes extracting the power-spectrum features of the four rhythms with the Welch spectral estimation algorithm and extracting the Hurst indexes of the four rhythms.
7. The automatic music switching method according to claim 4, wherein step A3 performs feature fusion on the feature matrix by principal component analysis.
8. The automatic music switching method according to claim 4, wherein the song classification is based on the association between individual emotional trigger points and songs, and comprises the following steps:
Step B1, down-sampling the song, framing it, and applying the short-time Fourier transform;
Step B2, extracting the RMS and MFCC features of the song;
Step B3, reducing the dimensionality of the feature parameters through cluster analysis, extracting the cluster centres and within-class variances with a hybrid iterative K-means clustering method;
Step B4, matching the emotion scale subjectively marked by the user while listening with the songs on the basis of statistical analysis, and mapping the electroencephalogram signals and corresponding songs into the quadrants of the classical valence-arousal two-dimensional emotion model, so that the songs are likewise labeled into the emotion quadrants, yielding the individual music-emotion model.
9. The automatic music switching method according to claim 8, wherein the music library is searched for songs similar to a seed song; the parameters of a Gaussian mixture model are trained by maximum likelihood estimation; content-based similarity analysis is performed on the songs; the user specifies a similarity level and selects songs in the music library; and the music application automatically outputs the result according to the user's selection, realizing the song classification.
CN202110040606.8A 2021-01-12 2021-01-12 Automatic music switching method based on electroencephalogram signals Pending CN112754502A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110040606.8A CN112754502A (en) 2021-01-12 2021-01-12 Automatic music switching method based on electroencephalogram signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110040606.8A CN112754502A (en) 2021-01-12 2021-01-12 Automatic music switching method based on electroencephalogram signals

Publications (1)

Publication Number Publication Date
CN112754502A (en) 2021-05-07

Family

ID=75699980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110040606.8A Pending CN112754502A (en) 2021-01-12 2021-01-12 Automatic music switching method based on electroencephalogram signals

Country Status (1)

Country Link
CN (1) CN112754502A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113974657A (en) * 2021-12-27 2022-01-28 深圳市心流科技有限公司 Training method, device and equipment based on electroencephalogram signals and storage medium
CN114139572A (en) * 2021-10-29 2022-03-04 杭州电子科技大学 Electroencephalogram emotion recognition method based on enhanced symmetric positive definite matrix
CN114384998A (en) * 2021-11-12 2022-04-22 南京邮电大学 Intelligent emotion state recognition and adjustment method based on electroencephalogram signals
CN114861274A (en) * 2022-05-10 2022-08-05 合肥工业大学 Real-time interactive space element optimization method based on EEG signal
CN114999611A (en) * 2022-07-29 2022-09-02 支付宝(杭州)信息技术有限公司 Model training and information recommendation method and device
CN116825060A (en) * 2023-08-31 2023-09-29 小舟科技有限公司 AI generation music optimization method based on BCI emotion feedback and related device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270210A (en) * 2010-12-30 2011-12-07 上海大学 MP3 audio attribute discretization method based on heterogeneity rule
CN102610234A (en) * 2012-04-09 2012-07-25 河海大学 Method for selectively mapping signal complexity and code rate
CN103412646A (en) * 2013-08-07 2013-11-27 南京师范大学 Emotional music recommendation method based on brain-computer interaction
US20150297109A1 (en) * 2014-04-22 2015-10-22 Interaxon Inc. System and method for associating music with brain-state data
CN106648058A (en) * 2016-10-10 2017-05-10 珠海格力电器股份有限公司 Song switching method and device
CN109190570A (en) * 2018-09-11 2019-01-11 河南工业大学 A kind of brain electricity emotion identification method based on wavelet transform and multi-scale entropy
CN109871831A (en) * 2019-03-18 2019-06-11 太原理工大学 A kind of emotion identification method and system
CN110353685A (en) * 2012-03-29 2019-10-22 昆士兰大学 For handling the method and apparatus of patient's sound
CN110946576A (en) * 2019-12-31 2020-04-03 西安科技大学 Visual evoked potential emotion recognition method based on width learning
CN110958899A (en) * 2017-07-24 2020-04-03 梅德律动公司 Enhanced music for repetitive athletic activity

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270210A (en) * 2010-12-30 2011-12-07 上海大学 MP3 audio attribute discretization method based on heterogeneity rule
CN110353685A (en) * 2012-03-29 2019-10-22 昆士兰大学 For handling the method and apparatus of patient's sound
CN102610234A (en) * 2012-04-09 2012-07-25 河海大学 Method for selectively mapping signal complexity and code rate
CN103412646A (en) * 2013-08-07 2013-11-27 南京师范大学 Emotional music recommendation method based on brain-computer interaction
US20150297109A1 (en) * 2014-04-22 2015-10-22 Interaxon Inc. System and method for associating music with brain-state data
CN106648058A (en) * 2016-10-10 2017-05-10 珠海格力电器股份有限公司 Song switching method and device
CN110958899A (en) * 2017-07-24 2020-04-03 梅德律动公司 Enhanced music for repetitive athletic activity
CN109190570A (en) * 2018-09-11 2019-01-11 河南工业大学 A kind of brain electricity emotion identification method based on wavelet transform and multi-scale entropy
CN109871831A (en) * 2019-03-18 2019-06-11 太原理工大学 A kind of emotion identification method and system
CN110946576A (en) * 2019-12-31 2020-04-03 西安科技大学 Visual evoked potential emotion recognition method based on width learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NIU Bin, "Personalized music recommendation model based on MFCC and GMM", Transactions of Beijing Institute of Technology *
MA Yong, "Algorithm research and preliminary implementation of a personalized emotional music playing system driven by EEG signals", Journal of Biomedical Engineering *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114139572A (en) * 2021-10-29 2022-03-04 杭州电子科技大学 Electroencephalogram emotion recognition method based on enhanced symmetric positive definite matrix
CN114384998A (en) * 2021-11-12 2022-04-22 南京邮电大学 Intelligent emotion state recognition and adjustment method based on electroencephalogram signals
CN113974657A (en) * 2021-12-27 2022-01-28 深圳市心流科技有限公司 Training method, device and equipment based on electroencephalogram signals and storage medium
CN114861274A (en) * 2022-05-10 2022-08-05 合肥工业大学 Real-time interactive space element optimization method based on EEG signal
CN114999611A (en) * 2022-07-29 2022-09-02 支付宝(杭州)信息技术有限公司 Model training and information recommendation method and device
CN114999611B (en) * 2022-07-29 2022-12-20 支付宝(杭州)信息技术有限公司 Model training and information recommendation method and device
CN116825060A (en) * 2023-08-31 2023-09-29 小舟科技有限公司 AI generation music optimization method based on BCI emotion feedback and related device
CN116825060B (en) * 2023-08-31 2023-10-27 小舟科技有限公司 AI generation music optimization method based on BCI emotion feedback and related device

Similar Documents

Publication Publication Date Title
CN112754502A (en) Automatic music switching method based on electroencephalogram signals
CN112656427B (en) Electroencephalogram emotion recognition method based on dimension model
Mert et al. Emotion recognition based on time–frequency distribution of EEG signals using multivariate synchrosqueezing transform
CN107361766B (en) Emotion electroencephalogram signal identification method based on EMD domain multi-dimensional information
Hsu et al. Wavelet-based fractal features with active segment selection: Application to single-trial EEG data
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
Tsai et al. Blind monaural source separation on heart and lung sounds based on periodic-coded deep autoencoder
CN112353391A (en) Electroencephalogram signal-based method and device for recognizing sound quality in automobile
Hou et al. Distinguishing different emotions evoked by music via electroencephalographic signals
CN103412646A (en) Emotional music recommendation method based on brain-computer interaction
CN112603332A (en) Emotion cognition method based on electroencephalogram signal characteristic analysis
Doulah et al. Neuromuscular disease classification based on mel frequency cepstrum of motor unit action potential
CN110942103A (en) Training method of classifier and computer-readable storage medium
CN108042145A (en) Emotional state recognition method and system and emotional state recognition device
CN111310570A (en) Electroencephalogram signal emotion recognition method and system based on VMD and WPD
Mijić et al. MMOD-COG: A database for multimodal cognitive load classification
CN115770044B (en) Emotion recognition method and device based on electroencephalogram phase amplitude coupling network
CN115211858A (en) Emotion recognition method and system based on deep learning and storable medium
Ren et al. MUAP extraction and classification based on wavelet transform and ICA for EMG decomposition
CN115414051A (en) Emotion classification and recognition method of electroencephalogram signal self-adaptive window
Zhang et al. Decoding olfactory EEG signals for different odor stimuli identification using wavelet-spatial domain feature
Pandey et al. Music identification using brain responses to initial snippets
CN115227243A (en) Automatic retrieval background music BCI system for judging brain fatigue and emotion
Zhang et al. Four-classes human emotion recognition via entropy characteristic and random Forest
Islam et al. Probability mapping based artifact detection and wavelet denoising based artifact removal from scalp EEG for BCI applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210507