Frédéric Elisei
2020 – today
- 2024
  - [c44] Gérard Bailly, Romain Legrand, Martin Lenglet, Frédéric Elisei, Maëva Hueber, Olivier Perrotin: Emotags: Computer-Assisted Verbal Labelling of Expressive Audiovisual Utterances for Expressive Multimodal TTS. LREC/COLING 2024: 5689-5695
  - [c43] Frédéric Elisei, Léa Haefflinger, Gérard Bailly: RoboTrio2: Annotated Interactions of a Teleoperated Robot and Human Dyads for Data-Driven Behavioral Models. HHAI Workshops 2024: 84-92
  - [c42] Léa Haefflinger, Frédéric Elisei, Brice Varini, Gérard Bailly: Probing the Inductive Biases of a Gaze Model for Multi-party Interaction. HRI (Companion) 2024: 507-511
  - [i2] Hippolyte Fournier, Sina Alisamir, Safaa Azzakhnini, Hanna Chainay, Olivier Koenig, Isabella Zsoldos, Eléonore Trân, Gérard Bailly, Frédéric Elisei, Béatrice Bouchot, Brice Varini, Patrick Constant, Joan Fruitet, Franck Tarpin-Bernard, Solange Rossato, François Portet, Fabien Ringeval: THERADIA WoZ: An Ecological Corpus for Appraisal-based Affect Research in Healthcare. CoRR abs/2405.06728 (2024)
- 2023
  - [c41] Léa Haefflinger, Frédéric Elisei, Silvain Gerber, Béatrice Bouchot, Jean-Philippe Vigne, Gérard Bailly: On the Benefit of Independent Control of Head and Eye Movements of a Social Robot for Multiparty Human-Robot Interaction. HCI (1) 2023: 450-466
  - [c40] Sanjana Sankar, Denis Beautemps, Frédéric Elisei, Olivier Perrotin, Thomas Hueber: Investigating the dynamics of hand and lips in French Cued Speech using attention mechanisms and CTC-based decoding. INTERSPEECH 2023: 4978-4982
  - [c39] Léa Haefflinger, Frédéric Elisei, Béatrice Bouchot, Brice Varini, Gérard Bailly: Data-Driven Generation of Eyes and Head Movements of a Social Robot in Multiparty Conversation. ICSR (1) 2023: 191-203
  - [i1] Sanjana Sankar, Denis Beautemps, Frédéric Elisei, Olivier Perrotin, Thomas Hueber: Investigating the dynamics of hand and lips in French Cued Speech using attention mechanisms and CTC-based decoding. CoRR abs/2306.08290 (2023)
- 2022
  - [c38] Rami Younes, Gérard Bailly, Frédéric Elisei, Damien Pellier: Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans. SIGDIAL 2022: 159-171
- 2021
  - [c37] Franck Tarpin-Bernard, Joan Fruitet, Jean-Philippe Vigne, Patrick Constant, Hanna Chainay, Olivier Koenig, Fabien Ringeval, Béatrice Bouchot, Gérard Bailly, François Portet, Sina Alisamir, Yongxin Zhou, Jean Serre, Vincent Delerue, Hippolyte Fournier, Kévin Berenger, Isabella Zsoldos, Olivier Perrotin, Frédéric Elisei, Martin Lenglet, Charles Puaux, Léo Pacheco, Mélodie Fouillen, Didier Ghenassia: THERADIA: Digital Therapies Augmented by Artificial Intelligence. AHFE (1) 2021: 478-485
  - [c36] Loriane Koelsch, Frédéric Elisei, Ludovic Ferrand, Pierre Chausse, Gérard Bailly, Pascal Huguet: Impact of Social Presence of Humanoid Robots: Does Competence Matter? ICSR 2021: 729-739
2010 – 2019
- 2018
  - [c35] Duc Canh Nguyen, Gérard Bailly, Frédéric Elisei: Comparing Cascaded LSTM Architectures for Generating Head Motion from Speech in Task-Oriented Dialogs. HCI (3) 2018: 164-175
- 2017
  - [j11] Duc Canh Nguyen, Gérard Bailly, Frédéric Elisei: Learning off-line vs. on-line models of interactive multimodal behaviors with recurrent neural networks. Pattern Recognit. Lett. 100: 29-36 (2017)
- 2016
  - [j10] Alaeddine Mihoub, Gérard Bailly, Christian Wolf, Frédéric Elisei: Graphical models for social behavior modeling in face-to-face interaction. Pattern Recognit. Lett. 74: 82-89 (2016)
  - [c34] Duc Canh Nguyen, Gérard Bailly, Frédéric Elisei: Conducting neuropsychological tests with a humanoid robot: Design and evaluation. CogInfoCom 2016: 337-342
  - [c33] Gérard Bailly, Frédéric Elisei, Alexandra Juphard, Olivier Moreaud: Quantitative Analysis of Backchannels Uttered by an Interviewer During Neuropsychological Tests. INTERSPEECH 2016: 2905-2909
- 2015
  - [j9] Alberto Parmiggiani, Marco Randazzo, Marco Maggiali, Giorgio Metta, Frédéric Elisei, Gérard Bailly: Design and Validation of a Talking Face for the iCub. Int. J. Humanoid Robotics 12(3): 1550026:1-1550026:20 (2015)
  - [j8] Alaeddine Mihoub, Gérard Bailly, Christian Wolf, Frédéric Elisei: Learning multimodal behavioral models for face-to-face social interaction. J. Multimodal User Interfaces 9(3): 195-210 (2015)
  - [c32] Gérard Bailly, Frédéric Elisei, Miquel Sauze: Beaming the Gaze of a Humanoid Robot. HRI (Extended Abstracts) 2015: 47-48
  - [c31] Francois R. Foerster, Gérard Bailly, Frédéric Elisei: Impact of iris size and eyelids coupling on the estimation of the gaze direction of a robotic talking head by human viewers. Humanoids 2015: 148-153
- 2014
  - [c30] Alberto Parmiggiani, Marco Randazzo, Marco Maggiali, Frédéric Elisei, Gérard Bailly, Giorgio Metta: An articulated talking face for the iCub. Humanoids 2014: 1-6
- 2013
  - [c29] Thomas Hueber, Gérard Bailly, Pierre Badin, Frédéric Elisei: Speaker adaptation of an acoustic-articulatory inversion model using cascaded Gaussian mixture regressions. INTERSPEECH 2013: 2753-2757
  - [c28] Thomas Hueber, Gérard Bailly, Pierre Badin, Frédéric Elisei: Vizart3d - real-time system of visual articulatory feedback. SLaTE 2013
- 2012
  - [j7] Jean-David Boucher, Ugo Pattacini, Amélie Lelong, Gérard Bailly, Frédéric Elisei, Sascha Fagel, Peter Ford Dominey, Jocelyne Ventre-Dominey: I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation. Frontiers Neurorobotics 6: 3 (2012)
  - [c27] Thomas Hueber, Atef Ben Youssef, Gérard Bailly, Pierre Badin, Frédéric Elisei: Cross-speaker Acoustic-to-Articulatory Inversion using Phone-based Trajectory HMM for Pronunciation Training. INTERSPEECH 2012: 783-786
  - [c26] Thomas Hueber, Atef Ben Youssef, Pierre Badin, Gérard Bailly, Frédéric Elisei: Vizart3D: Retour Articulatoire Visuel pour l'Aide à la Prononciation (Vizart3D: Visual Articulatory Feedback for Computer-Assisted Pronunciation Training) [in French]. JEP-TALN-RECITAL 2012: 17-18
- 2010
  - [j6] Pierre Badin, Yuliya Tarabalka, Frédéric Elisei, Gérard Bailly: Can you 'read' tongue movements? Evaluation of the contribution of tongue display to speech understanding. Speech Commun. 52(6): 493-503 (2010)
  - [j5] Gérard Bailly, Stephan Raidt, Frédéric Elisei: Gaze, conversational agents and face-to-face communication. Speech Commun. 52(6): 598-612 (2010)
  - [c25] Sascha Fagel, Gérard Bailly, Frédéric Elisei, Amélie Lelong: On the importance of eye gaze in a face-to-face collaborative task. AFFINE@MM 2010: 81-86
2000 – 2009
- 2009
  - [j4] Gérard Bailly, Oxana Govokhina, Frédéric Elisei, Gaspard Breton: Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models. EURASIP J. Audio Speech Music. Process. 2009 (2009)
- 2008
  - [c24] Pierre Badin, Frédéric Elisei, Gérard Bailly, Yuliya Tarabalka: An Audiovisual Talking Head for Augmented Speech Generation: Models and Animations Based on a Real Speaker's Articulatory Data. AMDO 2008: 132-143
  - [c23] Gérard Bailly, Antoine Bégault, Frédéric Elisei, Pierre Badin: Speaking with smile or disgust: data and models. AVSP 2008: 111-114
  - [c22] Gérard Bailly, Yu Fang, Frédéric Elisei, Denis Beautemps: Retargeting cued speech hand gestures for different talking heads and speakers. AVSP 2008: 153-158
  - [c21] Barry-John Theobald, Sascha Fagel, Gérard Bailly, Frédéric Elisei: LIPS2008: visual speech synthesis challenge. INTERSPEECH 2008: 2310-2313
  - [c20] Gérard Bailly, Oxana Govokhina, Gaspard Breton, Frédéric Elisei, Christophe Savariaux: A trainable trajectory formation model TD-HMM parameterized for the LIPS 2008 challenge. INTERSPEECH 2008: 2318-2321
  - [c19] Sascha Fagel, Frédéric Elisei, Gérard Bailly: From 3-d speaker cloning to text-to-audiovisual-speech. INTERSPEECH 2008: 2325
  - [c18] Pierre Badin, Yuliya Tarabalka, Frédéric Elisei, Gérard Bailly: Can you "read tongue movements"? INTERSPEECH 2008: 2635-2638
- 2007
  - [c17] Frédéric Elisei, Gérard Bailly, Alix Casari, Stephan Raidt: Towards eye gaze aware analysis and synthesis of audiovisual speech. AVSP 2007
  - [c16] Sascha Fagel, Gérard Bailly, Frédéric Elisei: Intelligibility of natural and 3d-cloned German speech. AVSP 2007
  - [c15] Stephan Raidt, Gérard Bailly, Frédéric Elisei: Analyzing and modeling gaze during face-to-face interaction. AVSP 2007: 23
  - [c14] Stephan Raidt, Gérard Bailly, Frédéric Elisei: Gaze Patterns during Face-to-Face Interaction. Web Intelligence/IAT Workshops 2007: 338-341
  - [c13] Antoine Picot, Gérard Bailly, Frédéric Elisei, Stephan Raidt: Scrutinizing Natural Scenes: Controlling the Gaze of an Embodied Conversational Agent. IVA 2007: 272-282
  - [c12] Stephan Raidt, Gérard Bailly, Frédéric Elisei: Analyzing Gaze During Face-to-Face Interaction. IVA 2007: 403-404
- 2006
  - [c11] Guillaume Gibert, Gérard Bailly, Frédéric Elisei: Evaluation of a virtual speech cuer. ExLing 2006: 141-144
  - [c10] Guillaume Gibert, Gérard Bailly, Frédéric Elisei: Evaluating a virtual speech cuer. INTERSPEECH 2006
  - [c9] Stephan Raidt, Gérard Bailly, Frédéric Elisei: Does a Virtual Talking Face Generate Proper Multimodal Cues to Draw User's Attention to Points of Interest? LREC 2006: 2544-2549
  - [c8] Gérard Bailly, Frédéric Elisei, Stephan Raidt, Alix Casari, Antoine Picot: Embodied Conversational Agents: Computing and Rendering Realistic Gaze Patterns. PCM 2006: 9-18
  - [c7] Aurélie Clodic, Sara Fleury, Rachid Alami, Raja Chatila, Gérard Bailly, Ludovic Brethes, Maxime Cottret, Patrick Danès, Xavier Dollat, Frédéric Elisei, Isabelle Ferrané, Matthieu Herrb, Guillaume Infantes, Christian Lemaire, Frédéric Lerasle, Jérôme Manhes, Patrick Marcoul, Paulo Menezes, Vincent Montreuil: Rackham: An Interactive Robot-Guide. RO-MAN 2006: 502-509
- 2005
  - [c6] Frédéric Elisei, Gérard Bailly, Guillaume Gibert, Rémi Brun: Capturing data and realistic 3d models for cued speech analysis and audiovisual synthesis. AVSP 2005: 125-130
  - [c5] Stephan Raidt, Gérard Bailly, Frédéric Elisei: Basic components of a face-to-face interaction with a conversational agent: mutual attention and deixis. sOc-EUSAI 2005: 247-252
- 2004
  - [j3] Matthias Odisio, Gérard Bailly, Frédéric Elisei: Tracking talking faces with shape and appearance models. Speech Commun. 44(1-4): 63-82 (2004)
  - [c4] Guillaume Gibert, Gérard Bailly, Frédéric Elisei, Denis Beautemps, Rémi Brun: Audiovisual text-to-cued speech synthesis. EUSIPCO 2004: 1007-1010
  - [c3] Guillaume Gibert, Gérard Bailly, Frédéric Elisei, Denis Beautemps, Rémi Brun: Evaluation of a Speech Cuer: From Motion Capture to a Concatenative Text-to-cued Speech System. LREC 2004
  - [c2] Guillaume Gibert, Gérard Bailly, Frédéric Elisei: Audiovisual text-to-cued speech synthesis. SSW 2004: 85-90
- 2003
  - [j2] Gérard Bailly, Maxime Berar, Frédéric Elisei, Matthias Odisio: Audiovisual Speech Synthesis. Int. J. Speech Technol. 6(4): 331-346 (2003)
- 2001
  - [c1] Frédéric Elisei, Matthias Odisio, Gérard Bailly, Pierre Badin: Creating and controlling video-realistic talking heads. AVSP 2001: 90-97
- 2000
  - [j1] Guillaume Gravier, Francis Van Aeken, Frédéric Elisei: Résumés de thèse (Thesis abstracts). Ann. des Télécommunications 55(9-10): 553-554 (2000)
1990 – 1999
- 1999
  - [b1] Frédéric Elisei: Clones 3D pour communication audio et vidéo (3D heads for audio and video communication). Joseph Fourier University, Grenoble, France, 1999