AVSP 2010: Hakone, Kanagawa, Japan
- Auditory-Visual Speech Processing, AVSP 2010, Hakone, Kanagawa, Japan, September 30 - October 3, 2010. ISCA 2010
Keynotes
- Tetsunori Kobayashi:
Robot as a multimodal human interface device. 1 - Gergely Csibra:
What do human infants expect when adults communicate to them? 2
Recognition
- Takami Yoshida, Kazuhiro Nakadai:
Audio-visual speech recognition system for a robot. 1-2 - Josef Chaloupka, Jan Nouza:
Audio-visual television broadcast programs processing, transcription, indexing and searching. 1-3 - Shin'ichi Takeuchi, Takashi Hashiba, Satoshi Tamura, Satoru Hayamizu:
Decision fusion by boosting method for multi-modal voice activity detection. 1-4 - Takeshi Saitoh, Ryosuke Konishi:
A study of influence of word lip reading by change of frame rate. 7-1 - Sébastien Picard, Gopal Ananthakrishnan, Preben Wik, Olov Engwall, Sherif M. Abdou:
Detection of specific mispronunciations using audiovisual features. 7-2 - Yuxuan Lan, Barry-John Theobald, Richard W. Harvey, Eng-Jon Ong, Richard Bowden:
Improving visual features for lip-reading. 7-3
Perception - McGurk Effect
- Jean-Luc Schwartz, Kaisa Tiippana, Tobias S. Andersen:
Disentangling unisensory from fusion effects in the attentional modulation of McGurk effects: a Bayesian modeling study suggests that fusion is attention-dependent. 2-1 - Olov Engwall:
Is there a McGurk effect for tongue reading? 2-2 - Tobias S. Andersen:
The McGurk illusion in the oddity task. 2-3
Emotion, Prosody
- Erin Cvejic, Jeesun Kim, Chris Davis:
Abstracting visual prosody across speakers and face areas. 3-1 - Jeesun Kim, Chris Davis:
Emotion perception by eye and ear and halves and wholes. 3-2 - Akihiro Tanaka, Ai Koizumi, Hisato Imai, Saori Hiramatsu, Eriko Hiramoto, Béatrice de Gelder:
Cross-cultural differences in the multisensory perception of emotion. 3-3
Perception
- Yori Kanekama, Satoko Hisanaga, Kaoru Sekiyama, Narihiro Kodama, Yasuhiro Samejima, Takao Yamada, Eiji Yumoto:
Long-term cochlear implant users have resistance to noise, but short-term users don't. 4-1 - Virginie Attina, Guillaume Gibert, Eric Vatikiotis-Bateson, Denis Burnham:
Production of Mandarin lexical tones: auditory and visual components. 4-2
Recognition, Synthesis (Poster Session)
- Jacob L. Newman, Barry-John Theobald, Stephen J. Cox:
Limitations of visual speech recognition. 1 - Panikos Heracleous, Miki Sato, Carlos Toshinori Ishi, Norihiro Hagita:
Investigating the role of the Lombard reflex in visual and audiovisual speech recognition. 2 - Akira Sasou, Yasuharu Hashimoto, Katsuhiko Sakaue:
Acoustic head gesture recognition and its applications. 3 - Peng Shen, Satoshi Tamura, Satoru Hayamizu:
Evaluation of real-time audio-visual speech recognition. 4 - Carlos Toshinori Ishi, Miki Sato, Norihiro Hagita, Shihong Lao:
Real-time audio-visual voice activity detection for speech recognition in noisy environments. 5 - Satoshi Tamura, Chiyomi Miyajima, Norihide Kitaoka, Takeshi Yamada, Satoru Tsuge, Tetsuya Takiguchi, Kazumasa Yamamoto, Takanobu Nishiura, Masato Nakayama, Yuki Denda, Masakiyo Fujimoto, Shigeki Matsuda, Tetsuji Ogawa, Shingo Kuroiwa, Kazuya Takeda, Satoshi Nakamura:
CENSREC-1-AV: an audio-visual corpus for noisy bimodal speech recognition. 6 - Sien W. Chew, Patrick Lucey, Sridha Sridharan, Clinton Fookes:
Exploring visual features through Gabor representations for facial expression detection. 7 - Asterios Toutios, Utpala Musti, Slim Ouni, Vincent Colotte, Brigitte Wrobel-Dautcourt, Marie-Odile Berger:
Towards a true acoustic-visual speech synthesis. 8 - Takaaki Kuratate, Marcia Riley:
Building speaker-specific lip models for talking heads from 3D face data. 9
Perception - Brain
- Daniel E. Callan:
Brain regions differentially involved with multisensory and visual only speech gesture information. 5-1 - Jun Shinozaki, Kaoru Sekiyama, Nobuo Hiroe, Taku Yoshioka, Masa-aki Sato:
Impact of language on audiovisual speech perception examined by fMRI. 5-2 - Satoko Hisanaga, Kaoru Sekiyama, Tomohiko Igasaki, Nobuki Murayama:
An ERP examination of audiovisual speech perception in Japanese younger and older adults. 5-3
Perception - Infants
- Christine Kitamura, Jeesun Kim:
Infants match auditory and visual speech in schematic point-light displays. 6-1 - Catherine T. Best, Christian Kroos, Julia Irwin:
I can see what you said: infant sensitivity to articulator congruency between audio-only and silent-video presentations of native and nonnative consonants. 6-2
Synthesis
- Wesley Mattheyses, Lukas Latacz, Werner Verhelst:
Optimized photorealistic audiovisual speech synthesis using active appearance modeling. 8-1 - Sarah Hilder, Barry-John Theobald, Richard W. Harvey:
In pursuit of visemes. 8-2 - Atef Ben Youssef, Pierre Badin, Gérard Bailly:
Acoustic-to-articulatory inversion in speech based on statistical models. 8-3
Perception, Emotion, Interaction (Poster Session)
- Kaisa Tiippana, Erin Hayes, Riikka Möttönen, Nina Kraus, Mikko Sams:
The McGurk effect at various auditory signal-to-noise ratios in American and Finnish listeners. 10 - Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz:
Binding and unbinding in audiovisual speech fusion: removing the McGurk effect by an incoherent preceding audiovisual context. 11 - Guillaume Gibert, Andrew Fordyce, Catherine J. Stevens:
Role of form and motion information in auditory-visual speech perception of McGurk combinations and fusions. 12 - Kasper Eskelund, Jyrki Tuomainen, Tobias S. Andersen:
Speech-specificity of two audiovisual integration effects. 13 - Shogo Ishikawa, Shinya Kiriyama, Yoichi Takebayashi, Shigeyoshi Kitazawa:
The multimodal analysis for understanding child behavior focused on attention-catching. 14 - Kenichi Shibata, Shinya Kiriyama, Tomohiro Haraikawa, Yoichi Takebayashi, Shigeyoshi Kitazawa:
A study of speech interface for living space adapting to user environment by considering scenery situation. 15 - Shiho Miyazawa, Akihiro Tanaka, Shuichi Sakamoto, Takehiko Nishimoto:
Effects of speech-rate conversion on asynchrony perception of audio-visual speech. 16 - Ai Koizumi, Akihiro Tanaka, Hisato Imai, Saori Hiramatsu, Eriko Hiramoto, Takao Sato, Béatrice de Gelder:
The effects of anxiety on the perception of emotion in the face and voice. 17 - Denis Burnham, Sebastian Joeffry, Lauren Rice:
"d-o-e-s-not-c-o-m-p-u-t-e": vowel hyperarticulation in speech to an auditory-visual avatar. 18