DOI: 10.1145/2047196.2047201

Crowds in two seconds: enabling realtime crowd-powered interfaces

Published: 16 October 2011

Abstract

Interactive systems must respond to user input within seconds. Therefore, to create realtime crowd-powered interfaces, we need to dramatically lower crowd latency. In this paper, we introduce the use of synchronous crowds for on-demand, realtime crowdsourcing. With synchronous crowds, systems can dynamically adapt tasks by leveraging the fact that workers are present at the same time. We develop techniques that recruit synchronous crowds in two seconds and use them to execute complex search tasks in ten seconds. The first technique, the retainer model, pays workers a small wage to wait and respond quickly when asked. We offer empirically derived guidelines for a retainer system that is low-cost and produces on-demand crowds in two seconds. Our second technique, rapid refinement, observes early signs of agreement in synchronous crowds and dynamically narrows the search space to focus on promising directions. This approach produces results that, on average, are of more reliable quality and arrive faster than the fastest crowd member working alone. To explore benefits and limitations of these techniques for interaction, we present three applications: Adrenaline, a crowd-powered camera where workers quickly filter a short video down to the best single moment for a photo; and Puppeteer and A|B, which examine creative generation tasks, communication with workers, and low-latency voting.
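To make the rapid refinement technique concrete, here is a minimal sketch in Python. It assumes a synchronous crowd that can be polled repeatedly about which sub-region of a search space looks most promising, and it narrows the space once early agreement crosses a threshold. The names and parameters (poll_workers, agreement_threshold, num_regions) are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import Counter

def rapid_refinement(frames, poll_workers, agreement_threshold=0.5,
                     num_regions=4, min_region=3):
    """Narrow a search space (here, video frame indices) by acting on
    early agreement among synchronous crowd workers."""
    region = list(frames)
    while len(region) > min_region:
        # Split the current region into contiguous candidate sub-regions.
        size = max(1, len(region) // num_regions)
        subregions = [region[i:i + size] for i in range(0, len(region), size)]
        # Poll the synchronous crowd: each vote names the sub-region that
        # currently looks most promising.
        votes = poll_workers(subregions)
        top, count = Counter(votes).most_common(1)[0]
        # Narrow only when early agreement is strong enough; otherwise
        # poll the same region again.
        if count / len(votes) >= agreement_threshold:
            region = subregions[top]
    return region

# Toy stand-in for real workers: five simulated voters who usually
# (70% of the time) pick the sub-region closest to frame 42.
def fake_poll(subregions):
    def pick_one():
        distances = [(min(abs(f - 42) for f in sub), i)
                     for i, sub in enumerate(subregions)]
        best = min(distances)[1]
        if random.random() < 0.7:
            return best
        return random.randrange(len(subregions))
    return [pick_one() for _ in range(5)]

print(rapid_refinement(range(100), fake_poll))  # converges on frame 42
```

In this toy run the simulated workers weakly prefer frames near index 42, so repeated polls converge on that neighborhood; acting on early agreement rather than waiting for every worker is what, per the abstract, lets the crowd return results faster and more reliably than its fastest individual member.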

Supplementary Material

JPG File (fp153.jpg)
MP4 File (fp153.mp4)

Published In

    UIST '11: Proceedings of the 24th annual ACM symposium on User interface software and technology
    October 2011
    654 pages
    ISBN:9781450307161
    DOI:10.1145/2047196
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. crowdsourcing
    2. human computation

    Qualifiers

    • Research-article

    Conference

    UIST '11

    Acceptance Rates

UIST '11 paper acceptance rate: 67 of 262 submissions (26%)
    Overall acceptance rate: 561 of 2,567 submissions (22%)


