Recognizing action units for facial expression analysis
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more ...
View synthesis under perspective projection
This paper addresses the issue of generating a 2D view of a 3D object from its other 2D views. The linear combination method is the typical approach to this problem. However, a 2D view cannot be represented by a linear combination of other 2D views under ...
Cross-language text retrieval by query translation using term re-weighting
In dictionary-based query translation for cross-language text retrieval, transfer ambiguity is one of the main causes of performance deterioration, but this problem has not received significant attention in the field. To resolve transfer ambiguity, this ...
Advances in the robust processing of multimodal speech and pen systems
Multimodal systems have developed rapidly during the past decade, with progress toward building more general and robust systems, as well as more transparent and usable human interfaces. These next-generation multimodal systems aim to improve the ...
Cited By
- Wehbi A, Cherif A and Tadj C Modeling ontology for multimodal interaction in ubiquitous computing systems Proceedings of the 2012 ACM Conference on Ubiquitous Computing, (842-849)
- Wehbi A, Ramdane-Cherif A and Tadj C Designing patterns for multimodal fusion Proceedings of the 12th International Conference on Computer Systems and Technologies, (115-121)
- Wehbi A, Hina M, Zaguia A, Ramdane-Cherif A and Tadj C Patterns architecture for fusion engines Proceedings of the 9th international conference on Toward useful services for elderly and people with disabilities: smart homes and health telematics, (261-265)
Index Terms
- Multimodal interface for human-machine communication
Recommendations
Human-robot collaborative tutoring using multiparty multimodal spoken dialogue
HRI '14: Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction
In this paper, we describe a project that explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring robot. A human-robot interaction setup is designed, and a human-human dialogue corpus is ...
Multimodal human discourse: gesture and speech
Gesture and speech combine to form a rich basis for human conversational interaction. To exploit these modalities in HCI, we need to understand the interplay between them and the way in which they support communication. We propose a framework for the ...