Abstract
This paper presents Kreader, a proof-of-concept interface that lets blind or visually impaired users have text read to them. A Kinect device tracks the user's body. All feedback is given through auditory cues, while a minimal visual interface can optionally be enabled. Interface elements are arranged as a list and placed egocentrically, relative to the user's body: moving around the room does not change their locations, so visually impaired users can rely on their "body sense" (proprioception) to find them. Kreader was evaluated in two test sessions. We think the results are encouraging and provide a solid foundation for future research into interfaces of this kind, which can be navigated by both sighted and visually impaired users.
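The abstract's key mechanism, body-anchored ("egocentric") element placement, can be illustrated with a short sketch. The following Python code is ours, not from the paper: the element layout, joint values, names (`Element`, `selected`), and the 10 cm selection radius are all illustrative assumptions, and a real implementation would additionally rotate offsets by the torso orientation and read joints from the Kinect SDK's skeleton stream.

```python
import numpy as np

# Hypothetical sketch of the egocentric layout described in the abstract.
# Each interface element stores a fixed offset from the user's torso joint,
# so walking around the room never changes where an element sits relative
# to the body. This sketch assumes the user faces the sensor (no rotation).

class Element:
    def __init__(self, name, offset):
        self.name = name                  # e.g. spoken label of the element
        self.offset = np.asarray(offset)  # metres, in the body-relative frame

# A vertical list of elements in front of the user's chest (assumed layout).
elements = [
    Element("previous", [0.0,  0.20, 0.40]),
    Element("play",     [0.0,  0.00, 0.40]),
    Element("next",     [0.0, -0.20, 0.40]),
]

def world_position(element, torso_pos):
    """Element's current world-space position, anchored to the torso."""
    return torso_pos + element.offset

def selected(hand_pos, torso_pos, radius=0.10):
    """Return the element whose anchor the hand is within `radius` of."""
    for e in elements:
        if np.linalg.norm(hand_pos - world_position(e, torso_pos)) < radius:
            return e
    return None

# Example: joint positions as a skeleton tracker might report them (assumed).
torso = np.array([1.5, 1.00, 2.00])
hand  = np.array([1.5, 0.82, 2.38])
hit = selected(hand, torso)
print(hit.name if hit else "nothing")  # -> "next"
```

Because the offsets are stored in the body frame, the same reaching gesture selects the same element wherever the user stands, which is what lets the paper's users locate elements by proprioception rather than by sight.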
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Gross, R., Bockholt, U., Biersack, E.W., Kuijper, A. (2013). Multimodal Kinect-Supported Interaction for Visually Impaired Users. In: Stephanidis, C., Antona, M. (eds) Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. UAHCI 2013. Lecture Notes in Computer Science, vol 8009. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39188-0_54
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39187-3
Online ISBN: 978-3-642-39188-0