SMM 2019: Vienna, Austria
- Venkata Subramanian Viraraghavan, Alexander Schindler, João P. Cabral, Gauri Deshpande, Sachin Patel:
2019 Workshop on Speech, Music and Mind, SMM 2019, Vienna, Austria, September 14, 2019. ISCA 2019
Forenoon keynote speech
- Shrikanth Narayanan:
Understanding affective expressions and experiences through behavioral machine intelligence.
Detecting mental states from speech
- Carlos Toshinori Ishi, Takayuki Kanda:
Prosodic and voice quality analyses of loud speech: differences of hot anger and far-directed speech.
- Lara Gauder, Agustín Gravano, Luciana Ferrer, Pablo Riera, Silvina Brussino:
A protocol for collecting speech data with varying degrees of trust.
- Pablo Riera, Luciana Ferrer, Agustín Gravano, Lara Gauder:
No Sample Left Behind: Towards a Comprehensive Evaluation of Speech Emotion Recognition Systems.
- Vishnu Vidyadhara Raju Vegesna, Krishna Gurugubelli, Mirishkar Sai Ganesh, Anil Kumar Vuppala:
Towards Feature-space Emotional Speech Adaptation for TDNN based Telugu ASR systems.
- Kaajal Gupta, Anzar Zulfiqar, Pushpa Ramu, Tilak Purohit, V. Ramasubramanian:
Detection of emotional states of OCD patients in an exposure-response prevention therapy scenario.
- A. K. Punnoose:
New Features for Speech Activity Detection.
- João P. Cabral, Alexsandro R. Meireles:
Transformation of voice quality in singing using glottal source features.
- Gauri Deshpande, Venkata Subramanian Viraraghavan, Rahul Gavas:
A Successive Difference Feature for Detecting Emotional Valence from Speech.
Afternoon keynote speech
- John Ashley Burgoyne:
Everyday Features for Everyday Listening.
Effect of music and audio on mental states
- Venkata Subramanian Viraraghavan, Rahul Gavas, Hema A. Murthy, R. Aravind:
Visualizing Carnatic music as projectile motion in a uniform gravitational field.
- Timothy Greer, Shrikanth Narayanan:
Using Shared Vector Representations of Words and Chords in Music for Genre Classification.
- Giorgia Cantisani, Gabriel Trégoat, Slim Essid, Gaël Richard:
MAD-EEG: an EEG dataset for decoding auditory attention to a target instrument in polyphonic music.
- Rajat Agarwal, Ravinder Singh, Suvi Saarikallio, Katrina McFerran, Vinoo Alluri:
Mining Mental States using Music Associations.
- Svetlana Rudenko, João P. Cabral:
Synaesthesia: How can it be used to enhance the audio-visual perception of music and multisensory design in digitally enhanced environments?