AVSP 2001: Aalborg, Denmark
- Dominic W. Massaro, Joanna Light, Kristin Geraci (eds.):
Auditory-Visual Speech Processing, AVSP 2001, Aalborg, Denmark, September 7-9, 2001. ISCA 2001
Visible Speech for Animation and Speechreading by Humans
- Björn Lidestam, Björn Lyxell:
Speechreading essentials: signal, paralinguistic cues, and skill. 1-6
- Edward T. Auer Jr., Lynne E. Bernstein, Sven L. Mattys:
The influence of the lexicon on visual spoken word recognition. 7-12
- Tara Ellis, Mairéad MacSweeney, Barbara Dodd, Ruth Campbell:
TAS: A new test of adult speechreading - deaf people really can be better speechreaders. 13-17
- Jean-Luc Schwartz, Christophe Savariaux:
Is it easier to lipread one's own speech gestures than those of somebody else? It seems not! 18-23
- Christian Kroos, Saeko Masuda, Takaaki Kuratate, Eric Vatikiotis-Bateson:
Towards the facecoder: dynamic face synthesis based on image motion estimation in speech. 24-29
- Sumedha Kshirsagar, Nadia Magnenat-Thalmann:
Viseme space for realistic speech animation. 30-35
Brain Activation in Auditory Visual Processing
- M. Bohning, Ruth Campbell, Annette Karmiloff-Smith:
Audiovisual speech perception in Williams Syndrome. 36-39
- Edward T. Auer Jr., Lynne E. Bernstein, Manbir Singh:
Comparing cortical activity during the perception of two forms of biological motion for language communication. 40-44
- Daniel E. Callan, Akiko E. Callan, Eric Vatikiotis-Bateson:
Neural areas underlying the processing of visual speech information under conditions of degraded auditory information. 45-49
- Lynne E. Bernstein, Jintao Jiang, Abeer Alwan, Edward T. Auer Jr.:
Similarity structure in visual phonetic perception and optical phonetics. 50-55
- Cécile Colin, Monique Radeau, Paul Deltenre:
The mismatch negativity (MMN) and the McGurk effect. 56-61
- Karen Nicholson, Shari R. Baum, Lola Cuddy, Kevin G. Munhall:
A case of multimodal aprosodia: impaired auditory and visual speech prosody perception in a patient with right hemisphere damage. 62-65
Facial Animation and Visual Speech Synthesis
- I-Chen Lin, Jeng-Sheng Yeh, Ming Ouhyoung:
Extraction of 3D facial motion parameters from mirror-reflected multi-view video for audio-visual synthesis. 66-71
- Catherine Pelachaud, Emanuela Magno Caldognetto, Claudio Zmarich, Piero Cosi:
Modelling an Italian talking head. 72-77
- Barry-John Theobald, J. Andrew Bangham, Iain A. Matthews, Gavin C. Cawley:
Visual speech synthesis using statistical models of shape and appearance. 78-83
- Allan Arb, Steven Gustafson, Timothy R. Anderson, Raymond E. Slyh:
Hidden Markov models for visual speech synthesis with limited data. 84-89
- Frédéric Elisei, Matthias Odisio, Gérard Bailly, Pierre Badin:
Creating and controlling video-realistic talking heads. 90-97
- Shigeo Morishima, Shin Ogata, Satoshi Nakamura:
Multimodal translation. 98-103
Correlates of Auditory and Visual Speech
- Lynne E. Bernstein, Curtis W. Ponton, Edward T. Auer Jr.:
Electrophysiology of unimodal and audiovisual speech perception. 104-109
- Jin Young Kim, Seung Ho Choi, Joohun Lee:
Development of a lip-sync algorithm based on an audio-visual corpus. 110-114
- Roland Goecke, J. Bruce Millar, Alexander Zelinsky, Jordi Robert-Ribes:
Analysis of audio-video correlation in vowels in Australian English. 115-120
- Christel Ekvall, Bertil Lyberg, Michael Randén:
Non-verbal correlates to focal accents in Swedish. 121-126
- Jeesun Kim, Chris Davis:
Visible speech cues and auditory detection of spoken sentences: an effect of degree of correlation between acoustic and visual properties. 127-131
- Ken W. Grant, Steven Greenberg:
Speech intelligibility derived from asynchronous processing of auditory-visual information. 132-137
Auditory Visual Speech Perception by Humans
- Marie-Agnès Cathiard, Jean-Luc Schwartz, Christian Abry:
Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]? 138-142
- M. V. McCotter, T. R. Jordan:
Investigating the role of luminance boundaries in visual and audiovisual speech recognition using line drawn faces. 143-148
- Marta Ortega-Llebaria, Andrew Faulkner, Valérie Hazan:
Auditory-visual L2 speech perception: Effects of visual cues and acoustic-phonetic context for Spanish learners of English. 149-154
- Denis Burnham, Susanna Lau, Helen Tam, Colin Schoknecht:
Visual discrimination of Cantonese tone by tonal but non-Cantonese speakers, and by non-tonal language speakers. 155-160
- Debra M. Hardison:
Bimodal word identification: effects of modality, speech style, sentence and phonetic/visual context. 161-166
- Kaisa Tiippana, Mikko Sams, Tobias S. Andersen:
Visual attention influences audiovisual speech perception. 167-171
Auditory Visual Speech Recognition by Humans and by Machine
- Tobias S. Andersen, Kaisa Tiippana, Jouko Lampinen, Mikko Sams:
Modeling of audiovisual speech perception in noise. 172-176
- Gerasimos Potamianos, Chalapathy Neti:
Automatic speechreading of impaired speech. 177-182
- Frédéric Berthommier:
Audio-visual recognition of spectrally reduced speech. 183-188
- Martin Heckmann, Frédéric Berthommier, Kristian Kroschel:
A hybrid ANN/HMM audio-visual speech recognition system. 189-194
- Eric K. Patterson, Sabri Gurbuz, Zekeriya Tufekci, John N. Gowdy:
Noise-based audio-visual fusion for robust speech recognition. 195-198
Poster Presentations (no full papers available)
- Hans-Heinrich Bothe:
LIPPS - A visual telephone for hearing-impaired. 199
- Gemma A. Calvert, Michael J. Brammer, Ruth Campbell:
Cortical substrates of seeing speech: still and moving faces. 199
- Björn Kabisch, Carol Nisch, Eckart R. Straube, Ruth Campbell:
Development of a completely computerized McGurk design under variation of the signal to noise ratio. 199
- Rainer Stiefelhagen, Jie Yang, Alex Waibel:
Estimating focus of attention based on gaze and sound. 200
- Jacek C. Wojdel, Léon J. M. Rothkrantz:
Obtaining person-independent feature space for lip reading. 200
- Michael M. Cohen, Rashid Clark, Dominic W. Massaro:
Animated speech: research progress and applications. 200