AVSP 2007: Hilvarenbeek, The Netherlands
- Jean Vroomen, Marc Swerts, Emiel Krahmer:
Auditory-Visual Speech Processing 2007, AVSP 2007, Hilvarenbeek, The Netherlands, August 31 - September 3, 2007. ISCA 2007
Oral Sessions
- Jeroen J. Stekelenburg, Jean Vroomen:
Neural correlates of multisensory integration of ecologically valid audiovisual events.
- Marco Calabresi, Sharon M. Thomas, Tim J. Folkard, Deborah Ann Hall:
Speechreading in context: an ERP study.
- Lisette Mol, Emiel Krahmer, Alfons Maes, Marc Swerts:
The communicative import of gestures: evidence from a comparative analysis of human-human and human-machine interactions.
- Sascha Fagel, Gérard Bailly, Frédéric Elisei:
Intelligibility of natural and 3d-cloned German speech.
- Patrick Lucey, Gerasimos Potamianos, Sridha Sridharan:
An extended pose-invariant lipreading system.
- Frédéric Elisei, Gérard Bailly, Alix Casari, Stephan Raidt:
Towards eye gaze aware analysis and synthesis of audiovisual speech.
- Edward T. Auer Jr.:
Further modeling of the effects of lexical uniqueness in speechreading: examining individual differences in segmental perception and testing predictions for sentence level performance.
- Alexandra Jesse, James M. McQueen:
Visual lexical stress information in audiovisual spoken-word recognition.
- Vicky Knowland, Jyrki Tuomainen, Stuart Rosen:
The effects of perceptual load and set on audio-visual speech integration.
- Hartmut Traunmüller, Niklas Öhrström:
The auditory and the visual percept evoked by the same audiovisual vowels.
- Marc Swerts, Emiel Krahmer:
Acoustic effects of visual beats.
- Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson:
MATLAB toolbox for audiovisual speech processing.
- Jon Barker, Xu Shao:
Audio-visual speech fragment decoding.
- Roland Hu, Robert I. Damper:
Audio-visual person identification on the XM2VTS database.
- Valérie Hazan, Anke Sennema:
The impact of visual training on the perception and production of a non-native phonetic contrast.
- Rebecca Baier, William J. Idsardi, Jeffrey Lidz:
Two-month-olds are sensitive to lip rounding in dynamic and static speech events.
- Jeesun Kim, Chris Davis:
Restoration effects in auditory and visual speech.
Poster Sessions
- Tian Gan, Wolfgang Menzel, Shiqiang Yang:
An audio-visual speech recognition framework based on articulatory features.
- Christian Cavé, Aurélie Stroumza, Mireille Bastien-Toniazzo:
The McGurk effect in dyslexic and normal-reading children: an experimental study.
- Azra Nahid Ali:
Exploring semantic cueing effects using McGurk fusion.
- Yoshimori Sugano:
Modeling the auditory capture effect in a bimodal synchronous tapping.
- Jean Vroomen, Sabine van Linden, Martijn Baart:
Lipread aftereffects in auditory speech perception: measuring aftereffects after a twenty-four hours delay.
- Marie-Odile Berger:
Realistic face animation from sparse stereo meshes.
- Bertrand Rivet, Laurent Girin, Christine Servière, Dinh-Tuan Pham, Christian Jutten:
Audiovisual speech source separation: a regularization method based on visual voice activity detection.
- Chris Davis, Jeesun Kim, Takaaki Kuratate, Johnson Chen, Stelarc, Denis Burnham:
Making a thinking-talking head.
- Roxane Bertrand, Gaëlle Ferré, Philippe Blache, Robert Espesser, Stéphane Rauzy:
Backchannels revisited from a multimodal perspective.
- Michael Pilling, Sharon M. Thomas:
Temporal factors in the electrophysiological markers of audiovisual speech integration.
- Amy Irwin, Sharon M. Thomas, Michael Pilling:
Regional accent familiarity and speechreading performance.
- Sharon M. Thomas, Michael Pilling:
Benefits of facial and textual information in understanding of vocoded speech.
- Javier Melenchón, Jordi Simó, Germán Cobo, Elisa Martínez:
Objective viseme extraction and audiovisual uncertainty: estimation limits between auditory and visual modes.
- Bertrand Rivet, Andrew J. Aubrey, Laurent Girin, Yulia Hicks, Christian Jutten, Jonathon A. Chambers:
Development and comparison of two approaches for visual speech analysis with application to voice activity detection.
- Anton Batliner, Christian Hacker, Moritz Kaiser, Hannes Mögele, Elmar Nöth:
Taking into account the user's focus of attention with the help of audio-visual information: towards less artificial human-machine-communication.
- Ben Milner, Ibrahim Almajai:
Noisy audio speech enhancement using Wiener filters derived from visual speech.
- Ibrahim Almajai, Ben Milner:
Maximising audio-visual speech correlation.
- Shuichi Sakamoto, Akihiro Tanaka, Komi Tsumura, Yôiti Suzuki:
Effect of speed difference between time-expanded speech and talker's moving image on word or sentence intelligibility.
- Anahita Basirat, Marc Sato, Jean-Luc Schwartz:
Audiovisual verbal transformations, as a way to study audiovisual interactions in speech perception.
- Marion Dohen, Hélène Loevenbruck:
Auditory-visual perception of acoustically degraded prosodic contrastive focus in French.
- Barry-John Theobald, Nicholas Wilkinson:
A real-time speech-driven talking head using active appearance models.
- Stephan Raidt, Gérard Bailly, Frédéric Elisei:
Analyzing and modeling gaze during face-to-face interaction.
- Valérie Hazan, Anke Sennema:
The impact of visual training on the perception and production of a non-native phonetic contrast.
- Slim Ouni, Kaïs Ouni:
Arabic pharyngeals in visual speech.
- Stefan Schacht, Oleg Fallman, Dietrich Klakow:
Fast lip tracking for speech/nonspeech detection.
- Douglas Brungart, Virginie van Wassenhove, Eugene Brandewie, Griffin D. Romigh:
The effects of temporal acceleration and deceleration on AV speech perception.
- David Dean, Patrick Lucey, Sridha Sridharan, Tim Wark:
Weighting and normalisation of synchronous HMMs for audio-visual speech recognition.
- Nadia Mana, Fabio Pianesi:
Modelling of emotional facial expressions during speech in synthetic talking heads using a hybrid approach.
- Zdenek Krnoul, Milos Zelezný:
Innovations in Czech audio-visual speech synthesis for precise articulation.
- Petr Císar, Milos Zelezný, Jan Zelinka, Jana Trojanová:
Development and testing of new combined visual speech parameterization.
- Katja Grauwinkel, Sascha Fagel:
Visualization of internal articulator dynamics for use in speech therapy for children with Sigmatismus Interdentalis.
- Hans Colonius, Adele Diederich:
A measure of auditory-visual integration efficiency based on Fechnerian scaling.
- Dawn M. Behne, Yue Wang, Magnus Alm, Ingrid Arntsen, Ragnhild Eg, Ane Valsø:
Changes in audio-visual speech perception during adulthood.
- Yue Wang, Dawn M. Behne, Haisheng Jiang, Angela Feehan:
Effect of native language experience on audio-visual perception of English fricatives by Korean and Mandarin natives.
- Emilie Troille, Marie-Agnès Cathiard, Christian Abry:
Consequences on bimodal perception of the timing of the consonant and vowel audiovisual flows.
- Girija Chetty, Michael Wagner:
Audiovisual speaker identity verification based on cross modal fusion.
- Adriano Vilela Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson:
MATLAB toolbox for audiovisual speech processing.
- Akihiro Tanaka, Shuichi Sakamoto, Komi Tsumura, Yôiti Suzuki:
Effects of intermodal timing difference and speed difference on intelligibility of auditory-visual speech in younger and older adults.
- Jean-Claude Martin, Christian Jacquemin, Laurent Pointal, Brian F. G. Katz, Christophe d'Alessandro, Aurélien Max, Matthieu Courgeon:
A 3d audio-visual animated agent for expressive conversational question answering.
- Eric Vatikiotis-Bateson, Adriano Vilela Barbosa, Cheuk Yi Chow, Martin Oberg, Johanna Tan, Hani C. Yehia:
Audiovisual Lombard speech: reconciling production and perception.
- Yuchun Chen, Valérie Hazan:
Developmental factor in auditory-visual speech perception - the McGurk effect in Mandarin-Chinese and English speakers.