Research Article
DOI: 10.1145/2858036.2858237

Chain Reactions: The Impact of Order on Microtask Chains

Published: 07 May 2016

Abstract

Microtasks are small units of work designed to be completed individually, eventually contributing to a larger goal. Although microtasks can be performed in isolation, in practice people often complete a chain of microtasks within a single session. Through a series of crowd-based studies, we look at how various microtasks can be chained together to improve efficiency and minimize mental demand, focusing on the writing domain. We find that participants completed low-complexity microtasks faster when they were preceded by the same type of microtask, whereas they found high-complexity microtasks less mentally demanding when preceded by microtasks on the same content. Furthermore, participants were faster at starting high-complexity microtasks after completing lower-complexity microtasks, but completion time and quality were not affected. These findings provide insight into how microtasks can be ordered to optimize transitions from one microtask to another.
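
To make the reported ordering effects concrete, here is a minimal illustrative sketch (not from the paper; the Microtask fields, the scoring weights, and the helper names transition_score and best_chain are all hypothetical). It scores each transition between adjacent microtasks according to the abstract's three findings: same-type transitions help low-complexity tasks, same-content transitions reduce the mental demand of high-complexity tasks, and a low-to-high complexity step helps workers start the high-complexity task. It then picks the chain order with the highest total score by brute force, which is adequate for the handful of microtasks in a single session.

# Hypothetical sketch (not from the paper): order a short chain of writing
# microtasks using the transition effects reported in the abstract.
# All field names, weights, and example tasks are illustrative.
from dataclasses import dataclass
from itertools import permutations

@dataclass(frozen=True)
class Microtask:
    name: str
    task_type: str    # e.g. "proofread", "rewrite"
    content_id: str   # which piece of content the task operates on
    complexity: str   # "low" or "high"

def transition_score(prev: Microtask, nxt: Microtask) -> int:
    """Score the transition prev -> nxt; higher means a smoother hand-off."""
    score = 0
    # Finding 1: low-complexity tasks are completed faster after the same task type.
    if nxt.complexity == "low" and nxt.task_type == prev.task_type:
        score += 2
    # Finding 2: high-complexity tasks are less mentally demanding after same-content tasks.
    if nxt.complexity == "high" and nxt.content_id == prev.content_id:
        score += 2
    # Finding 3: high-complexity tasks are started faster after a lower-complexity task.
    if nxt.complexity == "high" and prev.complexity == "low":
        score += 1
    return score

def best_chain(tasks: list) -> list:
    """Return the ordering of tasks with the highest total transition score."""
    def total(order):
        return sum(transition_score(a, b) for a, b in zip(order, order[1:]))
    return list(max(permutations(tasks), key=total))

if __name__ == "__main__":
    tasks = [
        Microtask("fix typos in the intro", "proofread", "intro", "low"),
        Microtask("fix typos in the methods", "proofread", "methods", "low"),
        Microtask("restructure the intro", "rewrite", "intro", "high"),
    ]
    for task in best_chain(tasks):
        print(task.name)

In this toy example the chain keeps the two proofreading microtasks adjacent and schedules the high-complexity rewrite last, right after the low-complexity pass over the same intro content, mirroring the abstract's findings.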





    Published In

    CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
    May 2016
    6108 pages
    ISBN: 9781450333627
    DOI: 10.1145/2858036

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 07 May 2016


    Badges

    • Honorable Mention

    Author Tags

    1. crowdsourcing
    2. microtasks
    3. selfsourcing

    Qualifiers

    • Research-article

    Conference

    CHI '16: CHI Conference on Human Factors in Computing Systems
    May 7 - 12, 2016
    San Jose, California, USA

    Acceptance Rates

    CHI '16 Paper Acceptance Rate: 565 of 2,435 submissions, 23%
    Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



    Cited By

    • (2024) Imagining a Future of Designing with AI: Dynamic Grounding, Constructive Negotiation, and Sustainable Motivation. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 289-300. DOI: 10.1145/3643834.3661525
    • (2022) Towards AI-powered data-driven education. Proceedings of the VLDB Endowment 15(12), 3798-3806. DOI: 10.14778/3554821.3554900
    • (2022) A Configurational Approach to Attracting Participation in Crowdsourcing Social Innovation: The Case of Openideo. Management Communication Quarterly. DOI: 10.1177/08933189221108360
    • (2022) The Crowd is Made of People. Proceedings of the 2022 Conference on Human Information Interaction and Retrieval, 25-35. DOI: 10.1145/3498366.3505815
    • (2022) AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-22. DOI: 10.1145/3491102.3517582
    • (2022) What Quality Control Mechanisms do We Need for High-Quality Crowd Work? IEEE Access 10, 99709-99723. DOI: 10.1109/ACCESS.2022.3207292
    • (2021) Task Assignment Strategies for Crowd Worker Ability Improvement. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2), 1-20. DOI: 10.1145/3479519
    • (2021) The Challenge of Variable Effort Crowdsourcing and How Visible Gold Can Help. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2), 1-26. DOI: 10.1145/3476073
    • (2021) Understanding Human-side Impact of Sampling Image Batches in Subjective Attribute Labeling. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2), 1-26. DOI: 10.1145/3476037
    • (2021) Crowd-Worker Skill Improvement with AI Co-Learners. Proceedings of the 9th International Conference on Human-Agent Interaction, 316-322. DOI: 10.1145/3472307.3484684
