Emily Mower Provost
Person information
- affiliation: University of Michigan, Ann Arbor, USA
2020 – today
- 2024
- [j22] Mohammad Soleymani, Shiro Kumano, Emily Mower Provost, Nadia Bianchi-Berthouze, Akane Sano, Kenji Suzuki: Guest Editorial Best of ACII 2021. IEEE Trans. Affect. Comput. 15(2): 376-379 (2024)
- [j21] Amrit Romana, Kazuhito Koishida, Emily Mower Provost: Automatic Disfluency Detection From Untranscribed Speech. IEEE ACM Trans. Audio Speech Lang. Process. 32: 4727-4740 (2024)
- [i28] Matthew Perez, Aneesha Sampath, Minxue Niu, Emily Mower Provost: Beyond Binary: Multiclass Paraphasia Detection with Generative Pretrained Transformers and End-to-End Models. CoRR abs/2407.11345 (2024)
- [i27] James Tavernor, Yara El-Tawil, Emily Mower Provost: The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability. CoRR abs/2408.11956 (2024)
- [i26] Tao Lu, Muzhe Wu, Xinyi Lu, Siyuan Xu, Shuyu Zhan, Anuj Tambwekar, Emily Mower Provost: Why Antiwork: A RoBERTa-Based System for Work-Related Stress Identification and Leading Factor Analysis. CoRR abs/2408.13473 (2024)
- [i25] Minxue Niu, Mimansa Jaiswal, Emily Mower Provost: From Text to Emotion: Unveiling the Emotion Annotation Capabilities of LLMs. CoRR abs/2408.17026 (2024)
- 2023
- [j20] Kris Johnson Ferreira, Emily Mower: Demand Learning and Pricing for Varying Assortments. Manuf. Serv. Oper. Manag. 25(4): 1227-1244 (2023)
- [j19] Chi-Chun Lee, Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan: An Engineering View on Emotions and Speech: From Analysis and Predictive Models to Responsible Human-Centered Applications. Proc. IEEE 111(10): 1142-1158 (2023)
- [j18] Zakaria Aldeneh, Emily Mower Provost: You're Not You When You're Angry: Robust Emotion Features Emerge by Recognizing Speakers. IEEE Trans. Affect. Comput. 14(2): 1351-1362 (2023)
- [c72] James Tavernor, Matthew Perez, Emily Mower Provost: Episodic Memory For Domain-Adaptable, Robust Speech Emotion Recognition. INTERSPEECH 2023: 656-660
- [c71] Minxue Niu, Amrit Romana, Mimansa Jaiswal, Melvin G. McInnis, Emily Mower Provost: Capturing Mismatch between Textual and Acoustic Emotion Expressions for Mood Identification in Bipolar Disorder. INTERSPEECH 2023: 1718-1722
- [i24] Amrit Romana, Kazuhito Koishida, Emily Mower Provost: Automatic Disfluency Detection from Untranscribed Speech. CoRR abs/2311.00867 (2023)
- [i23] Matthew Perez, Duc Le, Amrit Romana, Elise Jones, Keli Licata, Emily Mower Provost: Seq2seq for Automatic Paraphasia Detection in Aphasic Speech. CoRR abs/2312.10518 (2023)
- 2022
- [c70] Matthew Perez, Mimansa Jaiswal, Minxue Niu, Cristina Gorrostieta, Matthew Roddy, Kye Taylor, Reza Lotfian, John Kane, Emily Mower Provost: Mind the gap: On the value of silence representations to lexical-based speech emotion recognition. INTERSPEECH 2022: 156-160
- [c69] Amrit Romana, Minxue Niu, Matthew Perez, Angela Roberts, Emily Mower Provost: Enabling Off-the-Shelf Disfluency Detection and Categorization for Pathological Speech. INTERSPEECH 2022: 1916-1920
- 2021
- [j17] Brian Stasak, Julien Epps, Heather T. Schatten, Ivan W. Miller, Emily Mower Provost, Michael F. Armey: Read speech voice quality and disfluency in individuals with recent suicidal ideation or suicide attempt. Speech Commun. 132: 10-20 (2021)
- [j16] John Gideon, Melvin G. McInnis, Emily Mower Provost: Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG). IEEE Trans. Affect. Comput. 12(4): 1055-1068 (2021)
- [j15] Soheil Khorram, Melvin G. McInnis, Emily Mower Provost: Jointly Aligning and Predicting Continuous Emotion Annotations. IEEE Trans. Affect. Comput. 12(4): 1069-1083 (2021)
- [j14] Laura Biester, Katie Matton, Janarthanan Rajendran, Emily Mower Provost, Rada Mihalcea: Understanding the Impact of COVID-19 on Online Mental Health Forums. ACM Trans. Manag. Inf. Syst. 12(4): 31:1-31:28 (2021)
- [c68] Alex Wilf, Emily Mower Provost: Towards Noise Robust Speech Emotion Recognition Using Dynamic Layer Customization. ACII 2021: 1-8
- [c67] Matthew Perez, Amrit Romana, Angela Roberts, Noelle Carlozzi, Jennifer Ann Miner, Praveen Dayalu, Emily Mower Provost: Articulatory Coordination for Speech Motor Tracking in Huntington Disease. Interspeech 2021: 1409-1413
- [c66] Amrit Romana, John Bandon, Matthew Perez, Stephanie Gutierrez, Richard Richter, Angela Roberts, Emily Mower Provost: Automatically Detecting Errors and Disfluencies in Read Speech to Predict Cognitive Impairment in People with Parkinson's Disease. Interspeech 2021: 1907-1911
- [c65] Zakaria Aldeneh, Matthew Perez, Emily Mower Provost: Learning Paralinguistic Features from Audiobooks through Style Voice Conversion. NAACL-HLT 2021: 4736-4745
- [i22] Mimansa Jaiswal, Emily Mower Provost: Why Should I Trust a Model is Private? Using Shifts in Model Explanation for Evaluating Privacy-Preserving Emotion Recognition Model. CoRR abs/2104.08792 (2021)
- [i21] Mimansa Jaiswal, Emily Mower Provost: Best Practices for Noise-Based Augmentation to Improve the Performance of Emotion Recognition "In the Wild". CoRR abs/2104.08806 (2021)
- [i20] Lance Ying, Amrit Romana, Emily Mower Provost: Accounting for Variations in Speech Emotion Recognition with Nonparametric Hierarchical Neural Network. CoRR abs/2109.04316 (2021)
- [i19] Matthew Perez, Amrit Romana, Angela Roberts, Noelle Carlozzi, Jennifer Ann Miner, Praveen Dayalu, Emily Mower Provost: Articulatory Coordination for Speech Motor Tracking in Huntington Disease. CoRR abs/2109.13815 (2021)
- 2020
- [c64] Mimansa Jaiswal, Emily Mower Provost: Privacy Enhanced Multimodal Neural Representations for Emotion Recognition. AAAI 2020: 7985-7993
- [c63] Laura Biester, Katie Matton, Janarthanan Rajendran, Emily Mower Provost, Rada Mihalcea: Quantifying the Effects of COVID-19 on Mental Health Support Forums. NLP4COVID@EMNLP 2020
- [c62] Amrit Romana, John Bandon, Noelle Carlozzi, Angela Roberts, Emily Mower Provost: Classification of Manifest Huntington Disease Using Vowel Distortion Measures. INTERSPEECH 2020: 4966-4970
- [c61] Matthew Perez, Zakaria Aldeneh, Emily Mower Provost: Aphasic Speech Recognition Using a Mixture of Speech Intelligibility Experts. INTERSPEECH 2020: 4986-4990
- [c60] Mimansa Jaiswal, Cristian-Paul Bara, Yuanhang Luo, Mihai Burzo, Rada Mihalcea, Emily Mower Provost: MuSE: a Multimodal Dataset of Stressed Emotion. LREC 2020: 1499-1510
- [i18] Matthew Perez, Wenyu Jin, Duc Le, Noelle Carlozzi, Praveen Dayalu, Angela Roberts, Emily Mower Provost: Classification of Huntington Disease using Acoustic and Lexical Features. CoRR abs/2008.03367 (2020)
- [i17] Matthew Perez, Zakaria Aldeneh, Emily Mower Provost: Aphasic Speech Recognition using a Mixture of Speech Intelligibility Experts. CoRR abs/2008.10788 (2020)
- [i16] Laura Biester, Katie Matton, Janarthanan Rajendran, Emily Mower Provost, Rada Mihalcea: Quantifying the Effects of COVID-19 on Mental Health Support Forums. CoRR abs/2009.04008 (2020)
- [i15] Amrit Romana, John Bandon, Noelle Carlozzi, Angela Roberts, Emily Mower Provost: Classification of Manifest Huntington Disease using Vowel Distortion Measures. CoRR abs/2010.08503 (2020)
- [i14] Alex Wilf, Emily Mower Provost: Dynamic Layer Customization for Noise Robust Speech Emotion Recognition in Heterogeneous Condition Training. CoRR abs/2010.11226 (2020)
2010 – 2019
- 2019
- [j13] Biqiao Zhang, Emily Mower Provost, Georg Essl: Cross-Corpus Acoustic Emotion Recognition with Multi-Task Learning: Seeking Common Ground While Preserving Differences. IEEE Trans. Affect. Comput. 10(1): 85-99 (2019)
- [j12] Yelin Kim, Emily Mower Provost: ISLA: Temporal Segmentation and Labeling for Audio-Visual Emotion Recognition. IEEE Trans. Affect. Comput. 10(2): 196-208 (2019)
- [c59] Biqiao Zhang, Yuqing Kong, Georg Essl, Emily Mower Provost: f-Similarity Preservation Loss for Soft Labels: A Demonstration on Cross-Corpus Speech Emotion Recognition. AAAI 2019: 5725-5732
- [c58] Soheil Khorram, Melvin G. McInnis, Emily Mower Provost: Trainable Time Warping: Aligning Time-series in the Continuous-time Domain. ICASSP 2019: 3502-3506
- [c57] Biqiao Zhang, Soheil Khorram, Emily Mower Provost: Exploiting Acoustic and Lexical Properties of Phonemes to Recognize Valence from Speech. ICASSP 2019: 5871-5875
- [c56] Mimansa Jaiswal, Zakaria Aldeneh, Cristian-Paul Bara, Yuanhang Luo, Mihai Burzo, Rada Mihalcea, Emily Mower Provost: Muse-ing on the Impact of Utterance Ordering on Crowdsourced Emotion Annotations. ICASSP 2019: 7415-7419
- [c55] Mimansa Jaiswal, Zakaria Aldeneh, Emily Mower Provost: Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning. ICMI 2019: 174-184
- [c54] Katie Matton, Melvin G. McInnis, Emily Mower Provost: Into the Wild: Transitioning from Recognizing Mood in Clinical Interactions to Personal Conversations for Individuals with Bipolar Disorder. INTERSPEECH 2019: 1438-1442
- [c53] Zakaria Aldeneh, Mimansa Jaiswal, Michael Picheny, Melvin G. McInnis, Emily Mower Provost: Identifying Mood Episodes Using Dialogue Features from Clinical Interviews. INTERSPEECH 2019: 1926-1930
- [c52] John Gideon, Heather T. Schatten, Melvin G. McInnis, Emily Mower Provost: Emotion Recognition from Natural Phone Conversations in Individuals with and without Recent Suicidal Ideation. INTERSPEECH 2019: 3282-3286
- [i13] Soheil Khorram, Melvin G. McInnis, Emily Mower Provost: Trainable Time Warping: Aligning Time-Series in the Continuous-Time Domain. CoRR abs/1903.09245 (2019)
- [i12] Mimansa Jaiswal, Zakaria Aldeneh, Cristian-Paul Bara, Yuanhang Luo, Mihai Burzo, Rada Mihalcea, Emily Mower Provost: MuSE-ing on the Impact of Utterance Ordering On Crowdsourced Emotion Annotations. CoRR abs/1903.11672 (2019)
- [i11] John Gideon, Melvin G. McInnis, Emily Mower Provost: Barking up the Right Tree: Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG). CoRR abs/1903.12094 (2019)
- [i10] Soheil Khorram, Melvin G. McInnis, Emily Mower Provost: Jointly Aligning and Predicting Continuous Emotion Annotations. CoRR abs/1907.03050 (2019)
- [i9] Mimansa Jaiswal, Zakaria Aldeneh, Emily Mower Provost: Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning. CoRR abs/1908.08979 (2019)
- [i8] Vidhyasaharan Sethu, Emily Mower Provost, Julien Epps, Carlos Busso, Nicholas Cummins, Shrikanth S. Narayanan: The Ambiguous World of Emotion Representation. CoRR abs/1909.00360 (2019)
- [i7] John Gideon, Katie Matton, Steve Anderau, Melvin G. McInnis, Emily Mower Provost: When to Intervene: Detecting Abnormal Mood using Everyday Smartphone Conversations. CoRR abs/1909.11248 (2019)
- [i6] Zakaria Aldeneh, Mimansa Jaiswal, Michael Picheny, Melvin G. McInnis, Emily Mower Provost: Identifying Mood Episodes Using Dialogue Features from Clinical Interviews. CoRR abs/1910.05115 (2019)
- [i5] Mimansa Jaiswal, Emily Mower Provost: Privacy Enhanced Multimodal Neural Representations for Emotion Recognition. CoRR abs/1910.13212 (2019)
- 2018
- [j11] Duc Le, Keli Licata, Emily Mower Provost: Automatic quantitative analysis of spontaneous aphasic speech. Speech Commun. 100: 1-12 (2018)
- [c51] Zakaria Aldeneh, Dimitrios Dimitriadis, Emily Mower Provost: Improving End-of-Turn Detection in Spoken Dialogues by Detecting Speaker Intentions as a Secondary Task. ICASSP 2018: 6159-6163
- [c50] Matthew Perez, Wenyu Jin, Duc Le, Noelle Carlozzi, Praveen Dayalu, Angela Roberts, Emily Mower Provost: Classification of Huntington Disease Using Acoustic and Lexical Features. INTERSPEECH 2018: 1898-1902
- [c49] Soheil Khorram, Mimansa Jaiswal, John Gideon, Melvin G. McInnis, Emily Mower Provost: The PRIORI Emotion Dataset: Linking Mood to Emotion Detected In-the-Wild. INTERSPEECH 2018: 1903-1907
- [e1] Sidney K. D'Mello, Panayiotis G. Georgiou, Stefan Scherer, Emily Mower Provost, Mohammad Soleymani, Marcelo Worsley: Proceedings of the 2018 on International Conference on Multimodal Interaction, ICMI 2018, Boulder, CO, USA, October 16-20, 2018. ACM 2018 [contents]
- [i4] Zakaria Aldeneh, Dimitrios Dimitriadis, Emily Mower Provost: Improving End-of-turn Detection in Spoken Dialogues by Detecting Speaker Intentions as a Secondary Task. CoRR abs/1805.06511 (2018)
- [i3] Soheil Khorram, Mimansa Jaiswal, John Gideon, Melvin G. McInnis, Emily Mower Provost: The PRIORI Emotion Dataset: Linking Mood to Emotion Detected In-the-Wild. CoRR abs/1806.10658 (2018)
- 2017
- [j10] Carlos Busso, Srinivas Parthasarathy, Alec Burmania, Mohammed Abdel-Wahab, Najmeh Sadoughi, Emily Mower Provost: MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception. IEEE Trans. Affect. Comput. 8(1): 67-80 (2017)
- [c48] Zakaria Aldeneh, Emily Mower Provost: Using regional saliency for speech emotion recognition. ICASSP 2017: 2741-2745
- [c47] Biqiao Zhang, Georg Essl, Emily Mower Provost: Predicting the distribution of emotion perception: capturing inter-rater variability. ICMI 2017: 51-59
- [c46] Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost: Pooling acoustic and lexical features for the prediction of valence. ICMI 2017: 68-72
- [c45] Duc Le, Keli Licata, Emily Mower Provost: Automatic Paraphasia Detection from Aphasic Speech: A Preliminary Study. INTERSPEECH 2017: 294-298
- [c44] John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, Emily Mower Provost: Progressive Neural Networks for Transfer Learning in Emotion Recognition. INTERSPEECH 2017: 1098-1102
- [c43] Duc Le, Zakaria Aldeneh, Emily Mower Provost: Discretized Continuous Speech Emotion Recognition with Multi-Task Deep Recurrent Neural Network. INTERSPEECH 2017: 1108-1112
- [c42] Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, Melvin G. McInnis, Emily Mower Provost: Capturing Long-Term Temporal Dependencies with Convolutional Networks for Continuous Emotion Recognition. INTERSPEECH 2017: 1253-1257
- [i2] John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, Emily Mower Provost: Progressive Neural Networks for Transfer Learning in Emotion Recognition. CoRR abs/1706.03256 (2017)
- [i1] Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, Melvin G. McInnis, Emily Mower Provost: Capturing Long-term Temporal Dependencies with Convolutional Networks for Continuous Emotion Recognition. CoRR abs/1708.07050 (2017)
- 2016
- [j9] Duc Le, Keli Licata, Carol Persad, Emily Mower Provost: Automatic Assessment of Speech Intelligibility for Individuals With Aphasia. IEEE ACM Trans. Audio Speech Lang. Process. 24(11): 2187-2199 (2016)
- [c41] John Gideon, Emily Mower Provost, Melvin G. McInnis: Mood state prediction from speech of varying acoustic quality for individuals with bipolar disorder. ICASSP 2016: 2359-2363
- [c40] Biqiao Zhang, Emily Mower Provost, Georg Essl: Cross-corpus acoustic emotion recognition from singing and speaking: A multi-task learning approach. ICASSP 2016: 5805-5809
- [c39] Yelin Kim, Emily Mower Provost: Emotion spotting: discovering regions of evidence in audio-visual emotion expressions. ICMI 2016: 92-99
- [c38] Biqiao Zhang, Georg Essl, Emily Mower Provost: Automatic recognition of self-reported and perceived emotion: does joint modeling help? ICMI 2016: 217-224
- [c37] John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, Emily Mower Provost: Wild wild emotion: a multimodal ensemble approach. ICMI 2016: 501-505
- [c36] Soheil Khorram, John Gideon, Melvin G. McInnis, Emily Mower Provost: Recognition of Depression in Bipolar Disorder: Leveraging Cohort and Person-Specific Knowledge. INTERSPEECH 2016: 1215-1219
- [c35] Rebecca Bates, Eric Fosler-Lussier, Florian Metze, Martha A. Larson, Gina-Anne Levow, Emily Mower Provost: Experiences with Shared Resources for Research and Education in Speech and Language Processing. INTERSPEECH 2016: 1627-1631
- [c34] Duc Le, Emily Mower Provost: Improving Automatic Recognition of Aphasic Speech with AphasiaBank. INTERSPEECH 2016: 2681-2685
- 2015
- [j8] Emily Mower Provost, Yuan Shangguan, Carlos Busso: UMEME: University of Michigan Emotional McGurk Effect Data Set. IEEE Trans. Affect. Comput. 6(4): 395-409 (2015)
- [j7] Yelin Kim, Emily Mower Provost: Emotion Recognition During Speech Using Dynamics of Multiple Regions of the Face. ACM Trans. Multim. Comput. Commun. Appl. 12(1s): 25:1-25:23 (2015)
- [c33] Biqiao Zhang, Emily Mower Provost, Robert Swedberg, Georg Essl: Predicting Emotion Perception Across Domains: A Study of Singing and Speaking. AAAI 2015: 1328-1335
- [c32] Biqiao Zhang, Georg Essl, Emily Mower Provost: Recognizing emotion from singing and speaking using shared models. ACII 2015: 139-145
- [c31] Duc Le, Emily Mower Provost: Data selection for acoustic emotion recognition: Analyzing and comparing utterance and sub-utterance selection strategies. ACII 2015: 146-152
- [c30] Yuan Shangguan, Emily Mower Provost: EmoShapelets: Capturing local dynamics of audio-visual affective speech. ACII 2015: 229-235
- [c29] Yelin Kim, Emily Mower Provost: Leveraging inter-rater agreement for audio-visual emotion recognition. ACII 2015: 553-559
- [c28] Yelin Kim, Jixu Chen, Ming-Ching Chang, Xin Wang, Emily Mower Provost, Siwei Lyu: Modeling transition patterns between events for temporal human action segmentation and classification. FG 2015: 1-8
- 2014
- [c27] Duc Le, Keli Licata, Elizabeth Mercado, Carol Persad, Emily Mower Provost: Automatic analysis of speech quality for aphasia treatment. ICASSP 2014: 4853-4857
- [c26] Zahi N. Karam, Emily Mower Provost, Satinder Singh, Jennifer Montgomery, Christopher Archer, Gloria Harrington, Melvin G. McInnis: Ecologically valid long-term mood monitoring of individuals with bipolar disorder using speech. ICASSP 2014: 4858-4862
- [c25] Duc Le, Emily Mower Provost: Modeling pronunciation, rhythm, and intonation for automatic assessment of speech quality in aphasia rehabilitation. INTERSPEECH 2014: 1563-1567
- [c24] Yelin Kim, Emily Mower Provost: Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition. ACM Multimedia 2014: 27-36
- 2013
- [c23] Duc Le, Emily Mower Provost: Emotion recognition from spontaneous speech using Hidden Markov models with deep belief networks. ASRU 2013: 216-221
- [c22] Yelin Kim, Emily Mower Provost: Emotion classification via utterance-level dynamics: A pattern-based approach to characterizing affective expressions. ICASSP 2013: 3677-3681
- [c21] Emily Mower Provost: Identifying salient sub-utterance emotion dynamics using flexible units and estimates of affective flow. ICASSP 2013: 3682-3686
- [c20] Yelin Kim, Honglak Lee, Emily Mower Provost: Deep learning for robust feature generation in audiovisual emotion recognition. ICASSP 2013: 3687-3691
- [c19] Emily Mower Provost, Irene Zhu, Shrikanth S. Narayanan: Using emotional noise to uncloud audio-visual emotion perceptual evaluation. ICME 2013: 1-6
- [c18] Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan: Analyzing the structure of parent-moderated narratives from children with ASD using an entity-based approach. INTERSPEECH 2013: 2430-2434
- 2012
- [c17] Emily Mower Provost, Shrikanth S. Narayanan: Simplifying emotion classification through emotion distillation. APSIPA 2012: 1-4
- [c16] Theodora Chaspari, Emily Mower Provost, Athanasios Katsamanis, Shrikanth S. Narayanan: An acoustic analysis of shared enjoyment in ECA interactions of children with autism. ICASSP 2012: 4485-4488
- 2011
- [j6] Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan: Emotion recognition using a hierarchical binary decision tree approach. Speech Commun. 53(9-10): 1162-1171 (2011)
- [j5] Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan: A Framework for Automatic Human Emotion Classification Using Emotion Profiles. IEEE Trans. Speech Audio Process. 19(5): 1057-1070 (2011)
- [c15] Amin Atrash, Emily Mower, Khawaja Shams, Maja J. Mataric: Recognition of Physiological Data for a Motivational Agent. AAAI Spring Symposium: Computational Physiology 2011
- [c14] Emily Mower, Shrikanth S. Narayanan: A hierarchical static-dynamic framework for emotion classification. ICASSP 2011: 2372-2375
- [c13] Emily Mower, Matthew P. Black, Elisa Flores, Marian E. Williams, Shrikanth S. Narayanan: Rachel: Design of an emotionally targeted interactive agent for children with autism. ICME 2011: 1-6
- [c12] Emily Mower, Chi-Chun Lee, James Gibson, Theodora Chaspari, Marian E. Williams, Shrikanth S. Narayanan: Analyzing the Nature of ECA Interactions in Children with Autism. INTERSPEECH 2011: 2989-2993
- 2010
- [j4] Thomas Barkowsky, Sven Bertel, Frank Broz, Vinay K. Chaudhri, Nathan Eagle, Michael R. Genesereth, Harry Halpin, Emily Hamner, Gabe Hoffmann, Christoph Hölscher, Eric Horvitz, Tom Lauwers, Deborah L. McGuinness, Marek P. Michalowski, Emily Mower, Thomas F. Shipley, Kristen Stubbs, Roland Vogl, Mary-Anne Williams: Reports of the AAAI 2010 Spring Symposia. AI Mag. 31(3): 115-122 (2010)
- [c11] Dongrui Wu, Thomas D. Parsons, Emily Mower, Shrikanth S. Narayanan: Speech emotion estimation in 3D space. ICME 2010: 737-742
- [c10] Emily Mower, Kyu Jeong Han, Sungbok Lee, Shrikanth S. Narayanan: A cluster-profile representation of emotion using agglomerative hierarchical clustering. INTERSPEECH 2010: 797-800
- [c9] Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan: Robust representations for out-of-domain emotions using Emotion Profiles. SLT 2010: 25-30
2000 – 2009
- 2009
- [j3] Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan: Human Perception of Audio-Visual Synthetic Character Emotion Expression in the Presence of Ambiguous and Conflicting Information. IEEE Trans. Multim. 11(5): 843-855 (2009)
- [c8] Emily Mower, Angeliki Metallinou, Chi-Chun Lee, Abe Kazemzadeh, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan: Interpreting ambiguous emotional expressions. ACII 2009: 1-8
- [c7] Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, Shrikanth S. Narayanan: Emotion recognition using a hierarchical binary decision tree approach. INTERSPEECH 2009: 320-323
- [c6] Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan: Evaluating evaluators: a case study in understanding the benefits and pitfalls of multi-evaluator modeling. INTERSPEECH 2009: 1583-1586
- 2008
- [j2] Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette N. Chang, Sungbok Lee, Shrikanth S. Narayanan: IEMOCAP: interactive emotional dyadic motion capture database. Lang. Resour. Evaluation 42(4): 335-359 (2008)
- [c5] Emily Mower, Sungbok Lee, Maja J. Mataric, Shrikanth S. Narayanan: Human perception of synthetic character emotions in the presence of conflicting and congruent vocal and facial expressions. ICASSP 2008: 2201-2204
- [c4] Emily Mower, Sungbok Lee, Maja J. Mataric, Shrikanth S. Narayanan: Joint-processing of audio-visual signals in human perception of conflicting synthetic character emotions. ICME 2008: 961-964
- [c3] Emily Mower, Maja J. Mataric, Shrikanth S. Narayanan: Selection of Emotionally Salient Audio-Visual Features for Modeling Human Evaluations of Synthetic Character Emotion Displays. ISM 2008: 190-195
- 2007
- [j1] Michael Grimm, Kristian Kroschel, Emily Mower, Shrikanth S. Narayanan: Primitives-based evaluation and estimation of emotions in speech. Speech Commun. 49(10-11): 787-800 (2007)
- [c2] Emily Mower, David Feil-Seifer, Maja J. Mataric, Shrikanth S. Narayanan: Investigating Implicit Cues for User State Estimation in Human-Robot Interaction Using Physiological Measurements. RO-MAN 2007: 1125-1130
- 2006
- [c1] Michael Grimm, Emily Mower Provost, Kristian Kroschel, Shrikanth S. Narayanan: Combining categorical and primitives-based emotion recognition. EUSIPCO 2006: 1-5