DOI: 10.1145/1107548.1107586
Article

Levels of interaction allowing humans to command, interrogate and teach a communicating object: lessons learned from two robotic platforms

Published: 12 October 2005

Abstract

As robotic systems become increasingly capable of complex sensory, motor and information processing functions, the ability to interact with them in an ergonomic, real-time and adaptive manner becomes an increasingly pressing concern. In this context, the physical characteristics of the robotic device should become less of a direct concern, with the device being treated as a system that receives information, acts on that information, and produces information. Once the input and output protocols for a given system are well established, humans should be able to interact with these systems via a standardized spoken language interface that can be tailored if necessary to the specific system.

The objective of this research is to develop a generalized approach for human-machine interaction via spoken language that allows interaction at three levels. The first level is that of commanding or directing the behavior of the system. The second level is that of interrogating or requesting an explanation from the system. The third and most advanced level is that of teaching the machine a new form of behavior. The mapping between sentences and meanings in these interactions is guided by a neuropsychologically inspired model of grammatical construction processing. We explore these three levels of communication on two distinct robotic platforms. The novelty of this work lies in the use of the construction grammar formalism for binding language to meaning extracted from video in a generative and productive manner, and in thus allowing the human to use language to command, interrogate and modify the behavior of the robotic systems.
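The three interaction levels and the construction-based sentence-to-meaning mapping described above can be illustrated with a rough sketch. This is not the paper's implementation; all names (`Construction`, `InteractionManager`, the template notation with uppercase argument slots) are hypothetical illustrations of the general idea that a grammatical construction pairs a sentence form with a predicate-argument meaning.

```python
class Construction:
    """Pairs a sentence template with a meaning frame (predicate-argument form).
    Uppercase tokens in the template are open argument slots."""

    def __init__(self, template, meaning):
        self.slots = template.split()      # e.g. ["put", "X", "on", "Y"]
        self.meaning = meaning             # e.g. "move(X, Y)"

    def match(self, sentence):
        """Return slot bindings if the sentence fits this template, else None."""
        words = sentence.split()
        if len(words) != len(self.slots):
            return None
        bindings = {}
        for slot, word in zip(self.slots, words):
            if slot.isupper():             # open slot: bind the word
                bindings[slot] = word
            elif slot != word:             # fixed lexical item must match exactly
                return None
        return bindings


class InteractionManager:
    """Dispatches the three interaction levels: command, interrogate, teach."""

    def __init__(self):
        self.constructions = []            # learned sentence-to-meaning mappings
        self.history = []                  # executed meanings, for interrogation

    def teach(self, template, meaning):
        """Level 3: teach a new construction (a new form of behavior)."""
        self.constructions.append(Construction(template, meaning))

    def command(self, sentence):
        """Level 1: map a sentence to a meaning and record its execution."""
        for c in self.constructions:
            bindings = c.match(sentence)
            if bindings is not None:
                meaning = c.meaning
                for slot, word in bindings.items():
                    meaning = meaning.replace(slot, word)
                self.history.append(meaning)
                return meaning
        return None                        # no known construction matched

    def interrogate(self):
        """Level 2: explain the most recent action."""
        return self.history[-1] if self.history else "nothing yet"
```

In this toy form, teaching `"put X on Y"` with meaning `"move(X, Y)"` lets the command `"put block on table"` resolve generatively to `move(block, table)` for any noun filling the slots, which is the productivity property the abstract attributes to the construction grammar formalism.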

References

[1]
Bates E, McNew S, MacWhinney B, Devescovi A, Smith S (1982) Functional constraints on sentence processing: A cross linguistic study, Cognition, (11) 245--299.
[2]
Chang NC, Maia TV (2001) Grounded learning of grammatical constructions, AAAI Spring Symp. On Learning Grounded Representations, Stanford CA.
[3]
Dominey PF (2000) Conceptual Grounding in Simulation Studies of Language Acquisition, Evolution of Communication, 4(1), 57--85.
[4]
Dominey PF (2005) Towards a Construction-Based Account of Shared Intentions in Social Cognition, Comment on Tomasello et al. Understanding and sharing intentions: The origins of cultural cognition, Behavioral and Brain Sciences
[5]
Dominey PF, Alvarez M, Gao B, Jeambrun M, Weitzenfeld A, Medrano A (2005) Robot Command, Interrogation and Teaching via Social Interaction, Proc. IEEE Conf. on Humanoid Robotics, 2005.
[6]
Dominey PF, Boucher JD (2005) Developmental stages of perception and language acquisition in a perceptually grounded robot, in press, Cognitive Systems Research
[7]
Dominey PF, Hoen M, Lelekov T, Blanc JM (2003) Neurological basis of language in sequential cognition: Evidence from simulation, aphasia and ERP studies, Brain and Language, 86(2):207--25
[8]
Dominey PF, Inui T (2004) A Developmental Model of Syntax Acquisition in the Construction Grammar Framework with Cross-Linguistic Validation in English and Japanese, Proceedings of the CoLing Workshop on Psycho-Computational Models of Language Acquisition, Geneva, 33--40
[9]
Goldberg A (1995) Constructions. U Chicago Press, Chicago and London.
[10]
Gorniak P, Roy D (2004). Grounded Semantic Composition for Visual Scenes, Journal of Artificial Intelligence Research, Volume 21, pages 429--470.
[11]
Kotovsky L, Baillargeon R (1998) The development of calibration-based reasoning about collision events in young infants, Cognition, 67, 311--351
[12]
Siskind JM (2001) Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. Journal of AI Research, (15) 31--90
[13]
Steels, L. and Baillie, JC. (2003). Shared Grounding of Event Descriptions by Autonomous Robots. Robotics and Autonomous Systems, 43(2-3):163--173.
[14]
Tomasello, M. (2003) Constructing a language: A usage-based theory of language acquisition. Harvard University Press, Cambridge.

Published In

sOc-EUSAI '05: Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies
October 2005
316 pages
ISBN:1595933042
DOI:10.1145/1107548

Publisher

Association for Computing Machinery

New York, NY, United States


Conference

sOc-EUSAI05
sOc-EUSAI05: Smart Objects & Ambient Intelligence
October 12 - 14, 2005
Grenoble, France
