
Exploring the use of crowdsourcing to support empirical studies in software engineering

Published: 16 September 2010

Abstract

The power and generality of the findings obtained through empirical studies are bounded by the number and type of participating subjects. In software engineering, obtaining a large number of adequate subjects to evaluate a technique or tool is often a major challenge. In this work we explore the use of crowdsourcing as a mechanism to address that challenge by assisting in subject recruitment. More specifically, we show how we adapted a study to be performed on an infrastructure that not only makes it possible to reach a large base of users but also provides capabilities to manage those users as the study is being conducted. We discuss the lessons we learned through this experience, which illustrate the potential and tradeoffs of crowdsourcing software engineering studies.
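The abstract does not name the platform, but the study context points to a paid microtask service such as Amazon Mechanical Turk. As a loose illustration of the kind of infrastructure being described, the sketch below posts a study task as an "external question" HIT through the boto3 MTurk client, so participants complete the task on the researchers' own instrumented web page. This is a minimal sketch under assumptions, not the authors' actual setup; the study URL, reward, worker count, and screening threshold are hypothetical placeholders.

# Minimal sketch: publishing a study task as an external HIT on Amazon
# Mechanical Turk via boto3. Not the paper's actual infrastructure; the
# study URL, reward, and screening threshold below are placeholders.
import boto3

# Sandbox endpoint so trial runs cost nothing; remove endpoint_url (or point
# it at the production endpoint) to recruit real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds the study's own web page in the HIT, which keeps
# task instrumentation and data capture under the researchers' control.
STUDY_URL = "https://example.org/study"  # hypothetical study front end
external_question = f"""
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>{STUDY_URL}</ExternalURL>
  <FrameHeight>800</FrameHeight>
</ExternalQuestion>
""".strip()

response = mturk.create_hit(
    Title="Complete a short program-comprehension task for a research study",
    Description="Answer questions about a small program; about 30 minutes.",
    Keywords="research, study, programming",
    Reward="0.50",                        # USD per assignment (placeholder)
    MaxAssignments=50,                    # distinct workers (subjects) wanted
    LifetimeInSeconds=7 * 24 * 3600,      # how long the task stays listed
    AssignmentDurationInSeconds=45 * 60,  # time limit per worker
    Question=external_question,
    QualificationRequirements=[{
        # Built-in "percent assignments approved" qualification, used here
        # to screen out workers with a poor track record.
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    }],
)
print("HIT created:", response["HIT"]["HITId"])

Managing subjects while the study runs (reviewing submissions, approving or rejecting work, paying bonuses) would go through the same client, for example via list_assignments_for_hit and approve_assignment.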





    Published In

    ESEM '10: Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement
    September 2010
    423 pages
    ISBN:9781450300391
    DOI:10.1145/1852786

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. crowdsourcing
    2. empirical studies

    Qualifiers

    • Research-article

    Conference

    ESEM '10

    Acceptance Rates

    ESEM '10 Paper Acceptance Rate: 30 of 102 submissions, 29%
    Overall Acceptance Rate: 130 of 594 submissions, 22%


    Cited By

    • (2023) Human–Computer Interaction and Participation in Software Crowdsourcing. Electronics, 12(4):934. DOI: 10.3390/electronics12040934. Online publication date: 13-Feb-2023.
    • (2023) PlayTest: A Gamified Test Generator for Games. Proceedings of the 2nd International Workshop on Gamification in Software Development, Verification, and Validation, 47-51. DOI: 10.1145/3617553.3617884. Online publication date: 4-Dec-2023.
    • (2023) Do CONTRIBUTING Files Provide Information about OSS Newcomers’ Onboarding Barriers? Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 16-28. DOI: 10.1145/3611643.3616288. Online publication date: 30-Nov-2023.
    • (2023) What’s (Not) Working in Programmer User Studies? ACM Transactions on Software Engineering and Methodology, 32(5):1-32. DOI: 10.1145/3587157. Online publication date: 24-Jul-2023.
    • (2023) How R Developers Explain Their Package Choice: A Survey. 2023 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 1-12. DOI: 10.1109/ESEM56168.2023.10304869. Online publication date: 26-Oct-2023.
    • (2021) Towards a Methodology for Participant Selection in Software Engineering Experiments. Proceedings of the 15th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 1-6. DOI: 10.1145/3475716.3484273. Online publication date: 11-Oct-2021.
    • (2021) Conducting Malicious Cybersecurity Experiments on Crowdsourcing Platforms. Proceedings of the 2021 3rd International Conference on Big Data Engineering, 150-161. DOI: 10.1145/3468920.3468942. Online publication date: 29-May-2021.
    • (2021) Studying Programmer Behaviour at Scale: A Case Study using Amazon Mechanical Turk. Companion Proceedings of the 5th International Conference on the Art, Science, and Engineering of Programming, 36-48. DOI: 10.1145/3464432.3464436. Online publication date: 22-Mar-2021.
    • (2021) Towards an Extensible Architecture for an Empirical Software Engineering Computational Platform. Computational Science and Its Applications – ICCSA 2021, 231-246. DOI: 10.1007/978-3-030-87013-3_18. Online publication date: 10-Sep-2021.
    • (2020) Empirical Software Engineering Experimentation with Human Computation. Contemporary Empirical Methods in Software Engineering, 173-215. DOI: 10.1007/978-3-030-32489-6_7. Online publication date: 28-Aug-2020.
