Abstract
We present a multimodal dataset for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimuli selection was used, utilising retrieval by affective tags from the last.fm website, video highlight detection and an online assessment tool.
The dataset is made publicly available and we encourage other researchers to use it for testing their own affective state estimation methods. The dataset was first presented in the following paper:
- "DEAP: A Database for Emotion Analysis using Physiological Signals (PDF)", S. Koelstra, C. Muehl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, I. Patras, EEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2012
How to use
If you are interested in using this dataset, you will have to print, sign and scan an EULA (End User License Agreement) and upload it via the dataset request form. We will then supply you with a username and password to download the data. Please head on over to the downloads page for more details.
Also, please consult the dataset description page for a complete explanation of the dataset.
Credits
First and foremost, we'd like to thank the 32 participants in this study for having the patience and goodwill to let us record their data.
This dataset was collected by a crack squad of dedicated researchers:
- Sander Koelstra, Queen Mary University of London, United Kingdom
- Christian Mühl, University of Twente, The Netherlands
- Mohammad Soleymani, University of Geneva, Switzerland
- Ashkan Yazdani, EPFL, Switzerland
- Jong-Seok Lee, EPFL, Switzerland
All of this would also not have been possible without the expert guidance of our esteemed supervisors:
- Dr. Ioannis Patras, Queen Mary University of London, United Kingdom
- Dr. Anton Nijholt, University of Twente, The Netherlands
- Dr. Thierry Pun, University of Geneva, Switzerland
- Dr. Touradj Ebrahimi, EPFL, Switzerland
Finally, we'd like to thank our funding bodies:
- The European Community's Seventh Framework Programme (FP7/2007-2011) under grant agreement no. 216444 (PetaMedia).
- The BrainGain Smart Mix Programme of the Netherlands Ministry of Economic Affairs and the Netherlands Ministry of Education, Culture and Science.
- The Swiss National Foundation for Scientific Research and the NCCR Interactive Multimodal Information Management (IM2).
The authors also thank Sebastian Schmiedeke and Pascal Kelm at the Technische Universität Berlin for performing the shot boundary detection on this dataset.