
DOI:
10.5555/2162410
COST'09: Proceedings of the Second International Conference on Development of Multimodal Interfaces: Active Listening and Synchrony
2009 Proceedings
  • Editors:
  • Anna Esposito,
  • Nick Campbell,
  • Carl Vogel,
  • Amir Hussain,
  • Anton Nijholt
Publisher:
  • Springer-Verlag
  • Berlin, Heidelberg
Conference:
Dublin, Ireland, March 23-27, 2009
ISBN:
978-3-642-12396-2
Published:
23 March 2009
Sponsors:
Provincia di Salerno, International Institute for Advanced Scientific Studies "E.R. Caianiello", European COST Action 2102, Second University of Naples, Regione Campania

Article
Spacing and orientation in co-present interaction

An introduction to the way in which people arrange themselves spatially in various kinds of focused interaction, especially conversation. It is shown how participants may jointly establish and maintain a spatial-orientational system, referred to as an F-...

Article
Group cohesion, cooperation and synchrony in a social model of language evolution

Experiments conducted in a simulation environment demonstrated that both implicit coordination and explicit cooperation among agents lead to the rapid emergence of systems with key properties of natural languages, even under very pessimistic ...

Article
Pointing gestures and synchronous communication management

The focus of this paper is on pointing gestures that do not function as deictic pointing to a concrete referent but rather as structuring the flow of information. Examples are given on their use in giving feedback and creating a common ground in natural ...

Article
How an agent can detect and use synchrony parameter of its own interaction with a human?

Synchrony is claimed by psychology to be a crucial parameter of any social interaction: to give a human a feeling of natural interaction and a feeling of agency [17], an agent must be able to synchronise with this human at appropriate times [29] [11] [15] [16]...

Article
Accessible speech-based and multimodal media center interface for users with physical disabilities

We present a multimodal media center user interface with a hands-free speech recognition input method for users with physical disabilities. In addition to speech input, the application features a zoomable context + focus graphical user interface and ...

Article
A controller-based animation system for synchronizing and realizing human-like conversational behaviors

Embodied Conversational Agents (ECAs) are an application of virtual characters that is the subject of considerable ongoing research. An essential prerequisite for creating believable ECAs is the ability to describe and visually realize multimodal ...

Article
Generating simple conversations

This paper describes the Conversation Simulator, a software program designed to generate simulated conversations. The simulator consists of scriptable agents that can exchange speech acts and conversational signals. The paper illustrates how such a tool ...

Article
Media differences in communication

With the ever-growing ubiquity of computer-mediated communication, the application of language research to computer-mediated environments becomes increasingly relevant. How do overhearer effects, discourse markers, differences for monologues and ...

Article
Towards influencing of the conversational agent mental state in the task of active listening

This paper describes an approach that was used to influence the conversational agent Greta’s mental state. The beginning of the paper introduces the problem of conversational agents, especially in the listener role. The listener’s backchannels also ...

Article
Integrating emotions in the TRIPLE ECA model

This paper presents the introduction of emotion-based mechanisms in the TRIPLE ECA model. TRIPLE is a hybrid cognitive model consisting of three interacting modules – the reasoning, the connectionist, and the emotion engines – running in parallel. The ...

Article
Manipulating stress and cognitive load in conversational interactions with a multimodal system for crisis management support

The quality assessment of multimodal conversational interfaces is influenced by many factors. Stress and cognitive load are two of the most important. In the literature, these two factors are considered to be related and accordingly summarized under the ...

Article
Sentic computing: exploitation of common sense for the development of emotion-sensitive systems

Emotions are a fundamental component of human experience, cognition, perception, learning and communication. In this paper we explore how the use of Common Sense Computing can significantly enhance computers’ emotional intelligence, i.e. their capability ...

Article
Face-to-face interaction and the KTH cooking show

We share our experiences with integrating motion capture recordings in speech and dialogue research by describing (1) Spontal, a large project collecting 60 hours of video, audio and motion capture of spontaneous dialogues, with special ...

Article
Affect listeners: acquisition of affective states by means of conversational systems

We present the concept and motivations for the development of Affect Listeners, conversational systems aiming to detect and adapt to affective states of users, and meaningfully respond to users’ utterances both at the content- and affect-related level. ...

Article
Nonverbal synchrony or random coincidence? How to tell the difference

Nonverbal synchrony in face-to-face interaction has been studied in numerous empirical investigations focusing on various communication channels. Furthermore, the pervasiveness of synchrony in physics, chemistry and biology adds to its face validity. ...

Article
Biometric database acquisition close to “real world” conditions

In this paper we present an autonomous biometric device developed in the framework of a national project. This system is able to capture speech, hand-geometry, online signature and face, and can open a door when the user is positively verified. ...

Article
Optimizing phonetic encoding for Viennese unit selection speech synthesis

While developing lexical resources for a particular language variety (Viennese), we experimented with a set of 5 different phonetic encodings, termed phone sets, used for unit selection speech synthesis. We started with a very rich phone set based on ...

Article
Advances on the use of the foreign language recognizer

This paper presents our most recent activities in adapting a foreign-language-based speech recognition engine to the recognition of Lithuanian speech commands. As presented in our earlier papers, the speakers of less popular languages (such as ...

Article
Challenges in speech processing of Slavic languages (case studies in speech recognition of Czech and Slovak)

Slavic languages pose a big challenge for researchers dealing with speech technology. They exhibit a large degree of inflection, namely declension of nouns, pronouns and adjectives, and conjugation of verbs. This has a large impact on the size of ...

Article
Multiple feature extraction and hierarchical classifiers for emotions recognition

The recognition of a speaker's emotional state is a multi-disciplinary research area that has received great interest in recent years. One of the most important goals is to improve voice-based human-machine interaction. Recent works on this ...

Article
Emotional vocal expressions recognition using the COST 2102 Italian database of emotional speech

The present paper proposes a new speaker-independent approach to the classification of emotional vocal expressions by using the COST 2102 Italian database of emotional speech. The audio records extracted from video clips of Italian movies possess a ...

Article
Microintonation analysis of emotional speech

The paper addresses the reflection of microintonation in male and female acted emotional speech. The microintonation component of speech melody is analyzed with respect to its spectral and statistical parameters. The statistical results obtained for microintonation ...

Article
Speech emotion modification using a cepstral vocoder

This paper deals with speech modification using a cepstral vocoder with the intent to change the emotional content of speech. The cepstral vocoder contains the analysis and synthesis stages. The analysis stage performs the estimation of speech ...

Article
Analysis of emotional voice using electroglottogram-based temporal measures of vocal fold opening

Descriptions of emotional voice type have typically been provided in terms of fundamental frequency (f0), intensity and duration. Further features, such as measures of laryngeal characteristics, may help to improve recognition of emotional colouring in ...

Article
Effects of smiling on articulation: lips, larynx and acoustics

The present paper reports on results of a study investigating changes of lip features, larynx position and acoustics caused by smiling while speaking. 20 triplets of words containing one of the vowels /a:/, /i:/, /u:/ were spoken and audiovisually ...

Article
Neural basis of emotion regulation

From the neurobiological point of view, emotions can be defined as complex responses to personally relevant events; such responses are characterized by peculiar subjective feelings and by vegetative and motor reactions. In humans, the complex neural network ...

Article
Automatic meeting participant role detection by dialogue patterns

We introduce a new concept of ‘Vocalization Horizon’ for automatic speaker role detection in general meeting recordings. We demonstrate that classification accuracy reaches 38.5% when Vocalization Horizon and other features (i.e. vocalization duration ...

Article
Linguistic and non-verbal cues for the induction of silent feedback

The aim of this study is to analyze certain linguistic (dialogue acts, morphosyntactic units, semantics) and non-verbal cues (face, hand and body gestures) that may induce the silent feedback of a participant in face-to-face discussions. We analyze the ...

Article
Audiovisual tools for phonetic and articulatory visualization in computer-aided pronunciation training

This paper reviews interactive methods for improving the phonetic competence of subjects in second language learning as well as in speech therapy for subjects suffering from hearing impairments or articulation disorders. As an ...

Article
Gesture duration and articulator velocity in plosive-vowel-transitions

In this study the gesture duration and articulator velocity in consonant-vowel transitions have been analysed using electromagnetic articulography (EMA). The receiver coils were placed on the tongue, lips and teeth. We found onset and offset durations ...

Contributors
  • University of Campania Luigi Vanvitelli, Caserta
  • Trinity College Dublin
  • Trinity College Dublin
  • Edinburgh Napier University
  • University of Twente