- research-article, October 2018
Modeling Empathy in Embodied Conversational Agents: Extended Abstract
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 546–550. https://doi.org/10.1145/3242969.3264977
This paper outlines PhD research aimed at modeling empathy in embodied conversational systems. Our goal is to determine the requirements for implementing an empathic interactive agent and to develop evaluation methods that are ...
- research-article, October 2018
Large Vocabulary Continuous Audio-Visual Speech Recognition
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 538–541. https://doi.org/10.1145/3242969.3264976
We converse with other people using both sound and visuals, as our perception of speech is bimodal. Because the two modalities essentially echo the same speech structure, we manage to integrate them and often understand the message better than with the ...
- research-article, October 2018
Multimodal and Context-Aware Interaction in Augmented Reality for Active Assistance
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 506–510. https://doi.org/10.1145/3242969.3264966
Augmented reality eyewear devices (e.g. glasses, headsets) are poised to become ubiquitous in a similar way to smartphones, by providing quicker and more convenient access to information. There is theoretically no limit to their application areas, and ...
- research-article, October 2018
!FTL, an Articulation-Invariant Stroke Gesture Recognizer with Controllable Position, Scale, and Rotation Invariances
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 125–134. https://doi.org/10.1145/3242969.3243032
Nearest neighbor classifiers recognize stroke gestures by computing a (dis)similarity between a candidate gesture and a training set based on points, which may require normalization, resampling, and rotation to a reference before processing. To ...
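As context for this entry: the nearest-neighbor baseline the abstract refers to resamples each stroke to a fixed number of points, normalizes position and scale, and keeps the training template with the smallest summed point-to-point distance. The Python sketch below illustrates that generic pipeline only; it is not the paper's !FTL recognizer, the rotation-alignment step is omitted for brevity, and all function names are illustrative.

```python
import math

def resample(points, n=64):
    """Resample a stroke (list of (x, y) tuples) to n evenly spaced points."""
    path_len = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if path_len == 0:                       # degenerate stroke: a single spot
        return [points[0]] * n
    interval = path_len / (n - 1)
    pts, out, acc = list(points), [points[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval and d > 0:
            t = (interval - acc) / d        # interpolate a new point q on the segment
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)                # resume measuring from q
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:                     # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate the centroid to the origin and scale the bounding box to unit size."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    w = max(p[0] for p in points) - min(p[0] for p in points)
    h = max(p[1] for p in points) - min(p[1] for p in points)
    s = max(w, h) or 1.0
    return [((p[0] - cx) / s, (p[1] - cy) / s) for p in points]

def recognize(candidate, templates):
    """Return the label of the stored template nearest to the candidate gesture."""
    c = normalize(resample(candidate))
    def dist_to(label):
        t = normalize(resample(templates[label]))
        return sum(math.dist(a, b) for a, b in zip(c, t))
    return min(templates, key=dist_to)

templates = {
    "line": [(0, 0), (1, 1)],
    "square": [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)],
}
print(recognize([(0.1, 0.1), (0.9, 1.0)], templates))   # -> "line"
```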
- research-article, October 2018
Improving Object Disambiguation from Natural Language using Empirical Models
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 477–485. https://doi.org/10.1145/3242969.3243025
Robots, virtual assistants, and other intelligent agents need to effectively interpret verbal references to environmental objects in order to successfully interact and collaborate with humans in complex tasks. However, object disambiguation can be a ...
- short-paper, October 2018
Attention-based Audio-Visual Fusion for Robust Automatic Speech Recognition
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 111–115. https://doi.org/10.1145/3242969.3243014
Automatic speech recognition can potentially benefit from lip motion patterns, which complement acoustic speech and improve overall recognition performance, particularly in noise. In this paper we propose an audio-visual fusion strategy that goes ...
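As context for this entry: attention-based fusion typically scores each modality's frame embedding with a learned function, softmax-normalizes the scores, and combines the embeddings as a weighted sum, so the less reliable stream contributes less per frame. The numpy sketch below shows that generic scheme, not the strategy proposed in the paper; the random scoring vector and feature arrays stand in for trained parameters and real embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(audio_feat, visual_feat, w_score):
    """Fuse per-frame audio and visual embeddings (each T x D) frame by frame.

    w_score is a (D,) scoring vector standing in for a trained attention layer.
    Returns a (T x D) fused sequence in which, per frame, the modality with
    the lower attention score contributes less to the combination.
    """
    stacked = np.stack([audio_feat, visual_feat], axis=1)   # (T, 2, D)
    scores = stacked @ w_score                               # (T, 2) modality scores
    alpha = softmax(scores, axis=1)[..., None]               # (T, 2, 1) attention weights
    return (alpha * stacked).sum(axis=1)                     # (T, D) weighted sum

T, D = 100, 32                       # 100 frames, 32-dim embeddings
audio = rng.normal(size=(T, D))      # e.g. acoustic frame embeddings
visual = rng.normal(size=(T, D))     # e.g. lip-region visual embeddings
fused = fuse(audio, visual, rng.normal(size=D))
print(fused.shape)                   # (100, 32)
```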
- research-article, October 2018
The Multimodal Dataset of Negative Affect and Aggression: A Validation Study
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 376–383. https://doi.org/10.1145/3242969.3243013
Within the affective computing and social signal processing communities, increasing efforts are being made to collect data with genuine (emotional) content. When it comes to negative emotions and even aggression, ethical and privacy-related ...
- research-article, October 2018
Human, Chameleon or Nodding Dog?
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 428–436. https://doi.org/10.1145/3242969.3242998
Immersive virtual environments (IVEs) present rich possibilities for the experimental study of non-verbal communication. Here, the 'digital chameleon' effect, which suggests that a virtual speaker (agent) is more persuasive if it mimics its ...
- research-article, October 2018
If You Ask Nicely: A Digital Assistant Rebuking Impolite Voice Commands
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 95–102. https://doi.org/10.1145/3242969.3242995
Digital home assistants have an increasing influence on our everyday lives. The media now report how children adopt the resulting imperious language style when talking to real people. As a response to this behavior, we considered a digital ...
- research-article, October 2018
Floor Apportionment and Mutual Gazes in Native and Second-Language Conversation
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 334–341. https://doi.org/10.1145/3242969.3242991
A quantitative analysis of gaze between a speaker and listeners was conducted from the viewpoint of mutual activities in floor apportionment, under the assumption that mutual gaze plays an important role in coordinating speech interaction. We conducted ...
- research-article, October 2018
Analyzing Gaze Behavior and Dialogue Act during Turn-taking for Estimating Empathy Skill Level
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 31–39. https://doi.org/10.1145/3242969.3242978
We explored gaze behavior towards the end of utterances and dialogue acts (DA), i.e., verbal-behavior information indicating the intention of an utterance, during turn-keeping/changing to estimate empathy skill levels in multiparty discussions. This ...
- research-article, October 2018
Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 384–392. https://doi.org/10.1145/3242969.3242974
Autonomous systems are designed to carry out activities in remote, hazardous environments without the need for operators to micro-manage them. It is, however, essential that operators maintain situation awareness in order to monitor vehicle status and ...
- research-article, October 2018
Hand, Foot or Voice: Alternative Input Modalities for Touchless Interaction in the Medical Domain
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 145–153. https://doi.org/10.1145/3242969.3242971
During medical interventions, direct interaction with medical image data is a cumbersome task for physicians due to the sterile environment. Even though touchless input via hand, foot or voice is possible, these modalities are not available for these ...
- proceeding, October 2018
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction
- Sidney K. D'Mello,
- Panayiotis (Panos) Georgiou,
- Stefan Scherer,
- Emily Mower Provost,
- Mohammad Soleymani,
- Marcelo Worsley
Welcome to the 20th ACM International Conference on Multimodal Interaction (ICMI 2018), held in Boulder, CO, October 16 to 20, 2018. Boulder lies at the foothills of the Rocky Mountains and is home to the University of Colorado Boulder, one of the leading ...