Tuyen et al., 2022 - Google Patents
Agree or disagree? Generating body gestures from affective contextual cues during dyadic interactions
- Document ID: 6568555353412513091
- Authors: Tuyen N; Celiktutan O
- Publication year: 2022
- Publication venue: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Snippet
Humans naturally produce nonverbal signals such as facial expressions, body movements, hand gestures, and tone of voice, along with words, to communicate their messages, opinions, and feelings. Considering robots are progressively moving out from research …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/004—Artificial life, i.e. computers simulating life
          - G06N3/008—Artificial life, i.e. computers simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. robots replicating pets or humans in their appearance or behavior
        - G06N3/02—Computer systems based on biological models using neural network models
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
      - G06N5/00—Computer systems utilising knowledge based models
Similar Documents
Publication | Title
---|---
Ferstl et al. | Multi-objective adversarial gesture generation
Bhattacharya et al. | Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents
Ghorbandaei Pour et al. | Human–robot facial expression reciprocal interaction platform: case studies on children with autism
Bhattacharya et al. | Speech2affectivegestures: Synthesizing co-speech gestures with generative adversarial affective expression learning
Jonell et al. | Let's face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings
Ahuja et al. | To react or not to react: End-to-end visual pose forecasting for personalized avatar during dyadic conversations
Ishi et al. | A speech-driven hand gesture generation method and evaluation in android robots
Sadoughi et al. | Speech-driven expressive talking lips with conditional sequential generative adversarial networks
Vinciarelli et al. | Social signal processing: state-of-the-art and future perspectives of an emerging domain
Zhu et al. | Human motion generation: A survey
Chiu et al. | Gesture generation with low-dimensional embeddings
Scherer et al. | A generic framework for the inference of user states in human computer interaction: How patterns of low level behavioral cues support complex user states in HCI
Tuyen et al. | Agree or disagree? generating body gestures from affective contextual cues during dyadic interactions
Feng et al. | Learn2smile: Learning non-verbal interaction through observation
Rázuri et al. | Automatic emotion recognition through facial expression analysis in merged images based on an artificial neural network
CN114995657B (en) | Multimode fusion natural interaction method, system and medium for intelligent robot
Ondras et al. | Audio-driven robot upper-body motion synthesis
Paleari et al. | Toward multimodal fusion of affective cues
Rebol et al. | Passing a non-verbal turing test: Evaluating gesture animations generated from speech
Windle et al. | The UEA Digital Humans entry to the GENEA Challenge 2023
Tuyen et al. | Conditional generative adversarial network for generating communicative robot gestures
De Coninck et al. | Non-verbal behavior generation for virtual characters in group conversations
Roudposhti et al. | Parameterizing interpersonal behaviour with Laban movement analysis—A Bayesian approach
Rach et al. | Emotion recognition based preference modelling in argumentative dialogue systems
Ding et al. | Audio-driven laughter behavior controller