DOI: 10.1145/1330572.1330580

Hand posture recognition for human-robot interaction

Published: 15 November 2007

Abstract

In this paper, we describe a fast and accurate method for hand posture recognition in video sequences captured by multiple video cameras. The proposed technique combines head detection, skin detection, and human body proportions to recognize commands from real-time video. The technique is also robust to changes in lighting. Experimental results show that it can be used in vision-based applications that require real-time detection and recognition of hand postures.
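
To make the body-proportions step concrete, the sketch below shows how hand search regions could be derived from a detected head bounding box. This is an illustration only: the helper name and the specific ratios are assumptions loosely in the spirit of the classical proportions cited in reference [5], not geometry taken from the paper.

```python
# Hypothetical helper: derive hand search regions from a head bounding
# box using rough body-proportion ratios. The ratios are illustrative
# assumptions, not values taken from the paper.
def hand_search_regions(head_box):
    x, y, w, h = head_box  # head bounding box: (x, y, width, height)
    # Classical proportions put the shoulder span at roughly two to
    # three head widths, so a hand raised beside the head falls about
    # two head widths to either side, spanning from just above the
    # head down to chin level.
    left = (x - 2 * w, y - h, w, 2 * h)
    right = (x + 2 * w, y - h, w, 2 * h)
    return left, right
```

Skin-colored pixels found inside these regions would then be taken as evidence of a raised hand.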

References

[1]
Viola, P., and Jones, M. 2001. Robust Real-time Object Detection. Second International Workshop on Statistical and Computational Theories of Vision: Modeling, Learning, Computing, and Sampling, Vancouver, Canada, July 13, 2001.
[2]
Lienhart, R., and Maydt, J. 2002. An Extended Set of Haar-like Features for Rapid Object Detection. IEEE ICIP 2002, Vol. 1, pp. 900--903, Sep. 2002.
[3]
Pentiuc, S. G., Vatavu, R., Cerlinca, T. I., and Ungureanu, O. 2006. Methods and Algorithms for Gestures Recognition and Understanding. The Eighth All-Ukrainian International Conference, UkrOBRAZ'2006, pp. 15--18, Ukraine, August 2006.
[4]
HSV color space. http://en.wikipedia.org/wiki/HSV_color_space
[5]
Vitruvian Man. http://en.wikipedia.org/wiki/Vitruvian_Man
[6]
Cerlinca, T. I. 2004. A Distributed System for Real Time Traffic Monitoring and Analysis. Advances in Electrical and Computer Engineering, Suceava, Romania, vol. 4 (11), no. 2 (Sept. 2004), 82--86.


Reviews

Jason J. Corso

There is great promise in video-based recognition of natural human motion for interaction with machines and robots. However, articulated human motion presents difficult video recognition problems. Many methods, such as PFinder [1], rely on full-body tracking and pose estimation, which are computationally demanding and difficult. Other methods, such as the Everywhere Displays Project [2], fix the interaction to specific locations in the environment and use localized image processing for recognition, leading to more robust but less flexible systems. This paper takes a step toward merging these two paradigms.

The driving application is controlling the motion of a robot with various configurations of the operator's arms and hands; for example, raising both arms tells the robot to go forward. Although contrived (why not simply use a joystick?), the application is representative of the rich and natural interaction with machines that is currently not possible.

The authors present a sequential algorithm. First, it locates the operator's head using a statistical classifier and heuristic body models. Second, it searches a set of localized regions surrounding the body for the operator's hands. Hand presence is modeled in the hue, saturation, and value (HSV) color space, and the thresholds are learned from the previously detected head pixels. Third, upon detecting a valid configuration of the hands, the system instructs the robot to perform the corresponding command.

The particular choice of appearance models and head-tracking algorithms seems ad hoc, and the results in dynamic environments are poor. For example, no dynamical model such as a Kalman filter is used to track the head, and although the authors claim robustness to variation in illumination, they rely on simple color thresholding for skin detection. Nevertheless, the general approach to human-robot interaction (HRI), part tracking and part localized image processing, is promising.

Online Computing Reviews Service
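
The sequential pipeline the review describes maps naturally onto standard OpenCV primitives. The following is a minimal sketch under stated assumptions: OpenCV's stock Viola-Jones face cascade stands in for the paper's statistical head classifier, and the percentile thresholds, region offsets, fill-ratio test, and command mapping are all illustrative guesses, not the authors' actual parameters.

```python
# Minimal sketch of the reviewed pipeline: head detection, HSV skin
# thresholds learned from the head pixels, localized hand search, and
# command mapping. Every numeric value here is an assumption.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def recognize_command(frame):
    # Step 1: locate the operator's head with a statistical classifier.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face

    # Step 2: learn HSV skin thresholds from the detected head pixels,
    # which adapts the skin model to the current illumination.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    head_pixels = hsv[y:y + h, x:x + w].reshape(-1, 3)
    lo = np.percentile(head_pixels, 5, axis=0).astype(np.uint8)
    hi = np.percentile(head_pixels, 95, axis=0).astype(np.uint8)
    skin = cv2.inRange(hsv, lo, hi)

    # Step 3: test localized regions beside the head for a raised hand
    # (offsets expressed in head widths, following rough proportions).
    def hand_in(x0):
        x1 = min(x0 + w, frame.shape[1])
        x0 = max(x0, 0)
        roi = skin[max(y - h, 0):y + h, x0:x1]
        return roi.size > 0 and np.count_nonzero(roi) / roi.size > 0.2

    left_up = hand_in(x - 2 * w)
    right_up = hand_in(x + 2 * w)

    # Step 4: map the detected hand configuration to a robot command.
    if left_up and right_up:
        return "FORWARD"      # e.g., both arms up -> move forward
    if left_up:
        return "TURN_LEFT"
    if right_up:
        return "TURN_RIGHT"
    return "STOP"
```

In a per-frame loop, `recognize_command` would be called on each captured image and the returned token forwarded to the robot controller; learning the skin thresholds anew from each detected head is one plausible reading of how the method adapts to lighting changes.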




Published In

WMISI '07: Proceedings of the 2007 workshop on Multimodal interfaces in semantic interaction
November 2007
67 pages
ISBN:9781595938695
DOI:10.1145/1330572
  • Conference Chairs:
  • Naoto Iwahashi,
  • Mikio Nakano

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. head and hands detection
  2. human-robot interaction
  3. image processing
  4. posture recognition

Qualifiers

  • Short-paper

Conference

ICMI07


Article Metrics

  • Downloads (last 12 months): 5
  • Downloads (last 6 weeks): 0
Reflects downloads up to 09 Nov 2024.

Cited By

  • (2015) Fourier Features For Person Detection in Depth Data. Computer Analysis of Images and Patterns. 10.1007/978-3-319-23192-1_69, pp. 824--836. Online publication date: 25-Aug-2015.
  • (2013) Designing Gesture-Based Control for Factory Automation. Human-Computer Interaction – INTERACT 2013. 10.1007/978-3-642-40480-1_13, pp. 202--209. Online publication date: 2013.
  • (2011) The Understanding of Meaningful Events in Gesture-Based Interaction. Intelligent Video Event Analysis and Understanding. 10.1007/978-3-642-17554-1_1, pp. 1--19. Online publication date: 2011.
  • (2009) Hand posture recognition in video using multiple cues. Proceedings of the 2009 IEEE International Conference on Multimedia and Expo. 10.5555/1698924.1699141, pp. 886--889. Online publication date: 28-Jun-2009.
  • (2009) Hand posture recognition in video using multiple cues. 2009 IEEE International Conference on Multimedia and Expo. 10.1109/ICME.2009.5202637, pp. 886--889. Online publication date: Jun-2009.
  • (2009) 2D/3D Image Data Analysis for Object Tracking and Classification. Advances in Machine Learning and Data Analysis. 10.1007/978-90-481-3177-8_1, pp. 1--13. Online publication date: 30-Sep-2009.
  • (2009) Multiscale detection of gesture patterns in continuous motion trajectories. Proceedings of the 8th International Conference on Gesture in Embodied Communication and Human-Computer Interaction. 10.1007/978-3-642-12553-9_8, pp. 85--97. Online publication date: 25-Feb-2009.
  • (2008) Real Time Hand Based Robot Control Using 2D/3D Images. Proceedings of the 4th International Symposium on Advances in Visual Computing, Part II. 10.1007/978-3-540-89646-3_30, pp. 307--316. Online publication date: 1-Dec-2008.
