DOI: 10.1145/2642918.2647390

RichReview: blending ink, speech, and gesture to support collaborative document review

Published: 05 October 2014

Abstract

This paper introduces a novel document annotation system that aims to enable the kinds of rich communication that usually occur only in face-to-face meetings. Our system, RichReview, lets users create annotations on top of digital documents using three main modalities: freeform inking, voice for narration, and deictic gestures in support of voice. RichReview uses novel visual representations and time-synchronization between modalities to simplify annotation access and navigation. Moreover, RichReview's versatile support for multi-modal annotations enables users to mix and interweave different modalities in threaded conversations. A formative evaluation demonstrates early promise for the system, finding support for voice, pointing, and the combination of both to be especially valuable. In addition, initial findings point to the ways in which both content and social context affect modality choice.

Supplementary Material

suppl.mov (uistf3330-file3.mp4)
Supplemental video


Cited By

  • (2024) Show-and-Tell: An Interface for Delivering Rich Feedback upon Creative Media Artefacts. Multimodal Technologies and Interaction 8(3), 23. DOI: 10.3390/mti8030023
  • (2024) Generating Automatic Feedback on UI Mockups with Large Language Models. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3613904.3642782
  • (2024) Inkeraction: An Interaction Modality Powered by Ink Recognition and Synthesis. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-26. DOI: 10.1145/3613904.3642498
  • (2024) Challenges of Music Score Writing and the Potentials of Interactive Surfaces. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3613904.3642079
  • (2023) Spreadsheets on Interactive Surfaces: Breaking through the Grid with the Pen. ACM Transactions on Computer-Human Interaction 31(2), 1-33. DOI: 10.1145/3630097
  • (2023) Papeos: Augmenting Research Papers with Talk Videos. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, 1-19. DOI: 10.1145/3586183.3606770
  • (2023) SlideSpecs: Automatic and Interactive Presentation Feedback Collation. Proceedings of the 28th International Conference on Intelligent User Interfaces, 695-709. DOI: 10.1145/3581641.3584035
  • (2022) Automatic Speech Recognition Performance Improvement for Mandarin Based on Optimizing Gain Control Strategy. Sensors 22(8), 3027. DOI: 10.3390/s22083027
  • (2022) Let's Study Together: Designing "Study-With-Me" System with the Concept of Social Translucence. Archives of Design Research 35(4), 325-341. DOI: 10.15187/adr.2022.11.35.4.325
  • (2022) Investigating the Use of AR Glasses for Content Annotation on Mobile Devices. Proceedings of the ACM on Human-Computer Interaction 6(ISS), 430-447. DOI: 10.1145/3567727



    Published In

    UIST '14: Proceedings of the 27th annual ACM symposium on User interface software and technology
    October 2014
    722 pages
    ISBN:9781450330695
    DOI:10.1145/2642918
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. annotation
    2. asynchronous communication
    3. collaborative authoring
    4. multi-modal input
    5. pen interaction
    6. pointing gesture
    7. speech
    8. voice

    Qualifiers

    • Research-article

    Conference

    UIST '14

    Acceptance Rates

    UIST '14: 74 of 333 submissions accepted (22%)
    Overall: 842 of 3,967 submissions accepted (21%)


    Article Metrics

    • Downloads (last 12 months): 33
    • Downloads (last 6 weeks): 7
    Reflects downloads up to 28 Sep 2024

