DOI: 10.1145/2556288.2557407

EverTutor: automatically creating interactive guided tutorials on smartphones by user demonstration

Published: 26 April 2014

Abstract

We present EverTutor, a system that automatically generates interactive tutorials on smartphones from user demonstrations. For tutorial authors, it simplifies tutorial creation; for tutorial users, it provides contextual step-by-step guidance and avoids frequent context switching between a tutorial and the user's primary task. To generate tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and contextually overlays the corresponding input prompt. It also checks the correctness of the user's interactions to guide them step by step. We conducted a 6-person user study on creating tutorials and a 12-person user study on browsing tutorials, comparing EverTutor's interactive tutorials to static and video ones. Results show that creating tutorials with EverTutor is simpler and faster than producing static or video tutorials. When following tutorials, task completion time with interactive tutorials was 3-6 times shorter than with static or video tutorials, regardless of age group. In terms of preference, 83% of users chose the interactive type as their preferred tutorial format and rated it the easiest to follow and the easiest to understand.
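The pipeline described above starts by recording low-level touch events and inferring gestures from them. A minimal sketch of that classification step in Python, where the event format (timestamp, x, y), the thresholds, and the gesture names are all assumptions for illustration rather than EverTutor's actual implementation:

```python
import math

# Assumed thresholds (not from the paper): taps are short and nearly
# stationary; long presses dwell in place; anything that travels is a swipe.
TAP_MAX_DIST = 10.0      # pixels
LONG_PRESS_MIN_MS = 500  # milliseconds

def classify_gesture(events):
    """Classify one touch-down..touch-up stream of (timestamp_ms, x, y) events."""
    if not events:
        return "none"
    t0, x0, y0 = events[0]
    t1, x1, y1 = events[-1]
    duration = t1 - t0
    distance = math.hypot(x1 - x0, y1 - y0)
    if distance <= TAP_MAX_DIST:
        return "long_press" if duration >= LONG_PRESS_MIN_MS else "tap"
    # A moving contact is a swipe; report its dominant direction so a
    # tutorial player could prompt "swipe left", "swipe up", and so on.
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

print(classify_gesture([(0, 100, 100), (120, 103, 101)]))  # tap
print(classify_gesture([(0, 100, 100), (650, 102, 100)]))  # long_press
print(classify_gesture([(0, 300, 400), (180, 80, 390)]))   # swipe_left
```

A real recorder would also have to check intermediate points and handle multi-touch, but this illustrates how a handful of geometric and timing features suffice to turn a raw event stream into the gesture vocabulary a step-by-step tutorial needs.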

Supplementary Material

suppl.mov (pn2441-file3.mp4)
Supplemental video



    Published In

    CHI '14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
    April 2014
    4206 pages
    ISBN:9781450324731
    DOI:10.1145/2556288

    Publisher

    Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. contextual help
    2. smartphone
    3. touchscreen gesture
    4. tutorials

    Qualifiers

    • Tutorial

    Conference

    CHI '14: CHI Conference on Human Factors in Computing Systems
    April 26 - May 1, 2014
    Toronto, Ontario, Canada

    Acceptance Rates

    CHI '14 Paper Acceptance Rate: 465 of 2,043 submissions, 23%
    Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



    Article Metrics

    • Downloads (last 12 months): 36
    • Downloads (last 6 weeks): 4
    Reflects downloads up to 22 Nov 2024


    Cited By

    • (2024) EasyAsk: An In-App Contextual Tutorial Search Assistant for Older Adults with Voice and Touch Inputs. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8(3), 1-27. DOI: 10.1145/3678516
    • (2024) HelpCall: Designing Informal Technology Assistance for Older Adults via Videoconferencing. Proc. CHI '24, 1-23. DOI: 10.1145/3613904.3642938
    • (2024) TutoAI: A Cross-Domain Framework for AI-Assisted Mixed-Media Tutorial Creation on Physical Tasks. Proc. CHI '24, 1-17. DOI: 10.1145/3613904.3642443
    • (2023) SmartRecorder: An IMU-Based Video Tutorial Creation by Demonstration System for Smartphone Interaction Tasks. Proc. IUI '23, 278-293. DOI: 10.1145/3581641.3584069
    • (2023) Tutoria11y: Enhancing Accessible Interactive Tutorial Creation by Blind Audio Producers. Proc. CHI '23, 1-14. DOI: 10.1145/3544548.3580698
    • (2022) Synapse. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6(3), 1-24. DOI: 10.1145/3550321
    • (2022) Training Industrial End-User Programmers with Interactive Tutorials. Software: Practice and Experience 53(3), 729-747. DOI: 10.1002/spe.3167
    • (2021) HelpViz: Automatic Generation of Contextual Visual Mobile Tutorials from Text-Based Instructions. Proc. UIST '21, 1144-1153. DOI: 10.1145/3472749.3474812
    • (2021) Promoting Self-Efficacy Through an Effective Human-Powered Nonvisual Smartphone Task Assistant. Proc. ACM Hum.-Comput. Interact. 5(CSCW1), 1-19. DOI: 10.1145/3449188
    • (2021) A Survey Study of Factors Influencing Smart Phone Fluency. Engineering Psychology and Cognitive Ergonomics, 377-388. DOI: 10.1007/978-3-030-77932-0_30
