Abstract
As technology evolves (e.g., 3D cameras, accelerometers, or multitouch surfaces), new gestural interaction methods are becoming part of the everyday use of computational devices. This trend forces practitioners to develop applications for each interaction method individually. This paper tackles the problem of interpreting gestures in scenarios where multiple interaction methods coexist, by focusing on the abstract gesture rather than on the technology or technologies used to produce it. We describe the Flash Library for Interpreting Natural Gestures (FLING), a framework for developing multi-gestural applications that integrate and run on different gestural platforms. By offering an architecture for integrating and unifying different types of interaction, FLING eases scalability while providing an environment for rapid prototyping by novice multi-gestural programmers. Throughout the article we analyse the benefits of this approach, comparing it with state-of-the-art technologies, describe the framework architecture, and present several examples of applications and experiences of use.
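To make the unification idea concrete, the sketch below shows one way such an event-unification layer could look. It is a minimal illustration of the architecture the abstract describes, not FLING's actual API: the names (UnifiedEvent, EventUnifier, MouseAdapter) are hypothetical, and it is written in TypeScript rather than ActionScript for brevity. Device-specific adapters translate their native input into a shared abstract event, so gesture recognizers never depend on the originating technology.

// Hypothetical sketch of the unification idea described in the abstract;
// all names here are illustrative and are NOT FLING's actual API.

// The abstract event every device adapter produces.
interface UnifiedEvent {
  pointerId: number;                  // stable id for a pointer's lifetime
  x: number;                          // normalized [0, 1] surface coordinates
  y: number;
  phase: "start" | "move" | "end";
  source: string;                     // e.g. "mouse", "tuio", "accelerometer"
}

type Listener = (e: UnifiedEvent) => void;

// Central dispatcher: adapters push events in, gesture code subscribes.
class EventUnifier {
  private listeners: Listener[] = [];
  subscribe(l: Listener): void { this.listeners.push(l); }
  dispatch(e: UnifiedEvent): void { this.listeners.forEach(l => l(e)); }
}

// One adapter per interaction technology, each translating its native
// protocol into UnifiedEvent; a TUIO adapter would map cursor messages
// the same way this one maps mouse events.
class MouseAdapter {
  constructor(private unifier: EventUnifier) {}
  onMouseDown(x: number, y: number): void {
    this.unifier.dispatch({ pointerId: 0, x, y, phase: "start", source: "mouse" });
  }
  onMouseUp(x: number, y: number): void {
    this.unifier.dispatch({ pointerId: 0, x, y, phase: "end", source: "mouse" });
  }
}

// Gesture recognition works purely on the abstract stream: a toy "tap"
// detector that fires when a pointer starts and ends near the same spot,
// regardless of which device produced the events.
const unifier = new EventUnifier();
const starts = new Map<number, { x: number; y: number }>();
unifier.subscribe(e => {
  if (e.phase === "start") starts.set(e.pointerId, { x: e.x, y: e.y });
  if (e.phase === "end") {
    const s = starts.get(e.pointerId);
    if (s && Math.hypot(e.x - s.x, e.y - s.y) < 0.01) {
      console.log(`tap at (${e.x.toFixed(2)}, ${e.y.toFixed(2)}) via ${e.source}`);
    }
    starts.delete(e.pointerId);
  }
});

// Any adapter can now drive the same recognizer:
const mouse = new MouseAdapter(unifier);
mouse.onMouseDown(0.5, 0.5);
mouse.onMouseUp(0.5, 0.5);    // logs: tap at (0.50, 0.50) via mouse

Because application code subscribes only to the abstract stream, adding a new input technology means writing one adapter, with no changes to the gesture recognizers, which is the scalability property the abstract claims for this style of architecture.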
Copyright information
© 2011 IFIP International Federation for Information Processing
Cite this paper
Llinás, P., García-Herranz, M., Haya, P.A., Montoro, G. (2011). Unifying Events from Multiple Devices for Interpreting User Intentions through Natural Gestures. In: Campos, P., Graham, N., Jorge, J., Nunes, N., Palanque, P., Winckler, M. (eds.) Human-Computer Interaction – INTERACT 2011. Lecture Notes in Computer Science, vol. 6946. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23774-4_46