

The SJTU Emotion EEG Dataset (SEED) is a collection of EEG datasets provided by the BCMI laboratory, led by Prof. Bao-Liang Lu and Prof. Wei-Long Zheng. The name is inherited from the first version of the dataset, but the series now covers vigilance as well as emotion. As of December 2023, the cumulative numbers of applications and of research institutions using SEED have exceeded 5800 and 1000, respectively. The SEED series is open to the academic community. If you are interested in the datasets, take a look at the download page.

NEWS: The SEED-FRA and SEED-GER datasets have been released! For a detailed description of the data files, please see the corresponding description page. Download access can be obtained by request to the administrator.

NEWS: We have released a new version of SEED, which adds eye movement data for the existing 12 subjects. For a detailed description of the data files, please see the corresponding description page. Download access can be obtained by request to the administrator.

NEWS: The SEED-VLA and SEED-VRW datasets have been released! For a detailed description of the data files, please see the corresponding description page. Download access can be obtained by request to the administrator.

NEWS: The SEED-VII dataset has been released! For a detailed description of the data files, please see the corresponding description page. Download access can be obtained by request to the administrator.



SEED

The SEED dataset contains EEG and eye movement data from 12 subjects and EEG data from another 3 subjects, collected while they watched film clips. The film clips are carefully selected to induce three types of emotion: positive, negative, and neutral. Click here for details about the dataset.
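The EEG recordings in the SEED series are typically distributed as MATLAB .mat files. Below is a minimal Python sketch for inspecting such a file; the file name and the variable layout are hypothetical, so check the description page for the actual structure.

```python
from scipy.io import loadmat

# Hypothetical file name; the actual naming scheme and directory layout
# are documented on the SEED description page.
mat = loadmat("1_20131027.mat")

# loadmat returns a dict mapping variable names to NumPy arrays;
# listing the keys is the quickest way to see what a file contains.
for key, value in mat.items():
    if not key.startswith("__"):            # skip MATLAB metadata entries
        print(key, getattr(value, "shape", None))
```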



SEED-IV

SEED-IV is an evolution of the original SEED dataset. The number of emotion categories increases to four: happy, sad, fear, and neutral. SEED-IV provides not only EEG signals but also eye movement features recorded with SMI eye-tracking glasses, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.
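One straightforward way to use such a multimodal dataset is feature-level fusion: concatenate the EEG and eye movement feature vectors per trial and train a single classifier. The sketch below uses scikit-learn with random placeholder arrays; the feature dimensions (e.g., 310 EEG features from 62 channels x 5 bands) are illustrative assumptions, not the released files' actual layout.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder features: 200 trials with 310 EEG features and 33 eye
# movement features; real shapes depend on the released files.
eeg = rng.standard_normal((200, 310))
eye = rng.standard_normal((200, 33))
labels = rng.integers(0, 4, size=200)       # four emotion classes in SEED-IV

# Feature-level fusion: concatenate modalities, standardize, classify.
fused = np.hstack([eeg, eye])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, fused, labels, cv=5).mean())
```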



SEED-VIG

The SEED-VIG dataset targets the vigilance estimation problem. We built a virtual driving system in which a large screen is placed in front of a real car; subjects play a driving game in the car as if they were driving in a real-world environment. The SEED-VIG data were collected while subjects drove in this system. Vigilance levels are labeled with the PERCLOS indicator derived from SMI eye-tracking glasses. Click here for details about the dataset.
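PERCLOS is commonly defined as the fraction of time within a sliding window during which the eyes are closed. The sketch below computes this from a binary eye-closure signal; the sampling rate and window length are illustrative choices, not SEED-VIG's actual labeling protocol.

```python
import numpy as np

def perclos(eye_closed, fs, window_s=8.0):
    """Fraction of samples in each sliding window with eyes closed
    (one common definition of PERCLOS).

    eye_closed : 1-D binary array, 1 = eyes closed at that sample
    fs         : eye-tracking sampling rate in Hz
    window_s   : window length in seconds (illustrative choice)
    """
    win = int(window_s * fs)
    closed = np.asarray(eye_closed, dtype=float)
    # A moving average of the closure indicator gives, for every window
    # position, the fraction of that window spent with eyes closed.
    kernel = np.ones(win) / win
    return np.convolve(closed, kernel, mode="valid")

# Example: 60 s of a 25 Hz signal with eyes closed ~20% of the time.
rng = np.random.default_rng(0)
signal = (rng.random(60 * 25) < 0.2).astype(int)
print(perclos(signal, fs=25).mean())        # close to 0.2
```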



SEED-V

SEED-V is a further evolution of the original SEED dataset. The number of emotion categories increases to five: happy, sad, fear, disgust, and neutral. SEED-V provides not only EEG signals but also eye movement features recorded with SMI eye-tracking glasses, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.




SEED-VII

SEED-VII is the latest evolution of the original SEED dataset. The number of emotion categories increases to seven: happy, sad, fear, disgust, neutral, anger, and surprise. SEED-VII provides not only EEG signals but also eye movement features recorded with Tobii Pro Fusion eye-tracking devices, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.


SEED-FRA

The SEED-FRA dataset contains EEG and eye movement data from 8 French subjects, with positive, negative, and neutral emotion labels. Click here for details about the dataset.



SEED-GER

The SEED-GER dataset contains EEG and eye movement data from 8 German subjects, with positive, negative, and neutral emotion labels. Click here for details about the dataset.



SEED-VLA/VRW

We developed the SEED-VLA and SEED-VRW datasets for fatigue detection using EEG signals from laboratory and real-world driving. The laboratory setup was a simulated driving environment: a black car with a steering wheel and pedals, facing a large screen that simulated various driving conditions. In the real-world tests, participants drove a Benben EV200 fitted with a Logitech steering wheel and pedals, with vehicle speed limited for safety. The experiments aimed to induce fatigue in controlled and naturalistic settings so that vigilance could be assessed through EEG. Click here for details about the dataset.
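A common way to quantify vigilance from EEG is spectral band power, since theta and alpha activity tend to change with drowsiness. Below is a minimal sketch using Welch's method on a synthetic single-channel signal; the sampling rate and band boundaries are illustrative assumptions, not the datasets' recording parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Power of 1-D signal x in [band[0], band[1]) Hz, via Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return np.trapz(psd[mask], freqs[mask])

# Synthetic 10 s single-channel "EEG" at 200 Hz: a 10 Hz rhythm plus noise.
fs = 200
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

print("theta (4-8 Hz): ", band_power(x, fs, (4, 8)))
print("alpha (8-13 Hz):", band_power(x, fs, (8, 13)))
```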

Acknowledgement

This work was supported in part by grants from the National Key Research and Development Program of China (Grant No. 2017YFB1002501), the National Natural Science Foundation of China (Grant No. 61272248 and No. 61673266), the National Basic Research Program of China (Grant No. 2013CB329401), the Science and Technology Commission of Shanghai Municipality (Grant No. 13511500200), the Open Funding Project of National Key Laboratory of Human Factors Engineering (Grant No. HF2012-K-01), the Fundamental Research Funds for the Central Universities, and the European Union Seventh Framework Program (Grant No. 247619).