
DOI: 10.1145/2702123.2702443

Understanding Malicious Behavior in Crowdsourcing Platforms: The Case of Online Surveys

Published: 18 April 2015

Abstract

Crowdsourcing is increasingly used as a means of tackling problems that require human intelligence. With an ever-growing worker base completing microtasks on crowdsourcing platforms in exchange for financial gain, stringent mechanisms are needed to prevent the exploitation of deployed tasks. Quality-control mechanisms must accommodate a diverse pool of workers exhibiting a wide range of behavior. A pivotal step toward fraud-proof task design is understanding the behavioral patterns of microtask workers. In this paper, we analyze the malicious activity prevalent on crowdsourcing platforms and study the behavior exhibited by trustworthy and untrustworthy workers, particularly on crowdsourced surveys. Based on our analysis of typical malicious activity, we define and identify different types of workers in the crowd, propose a method to measure malicious activity, and finally present guidelines for the efficient design of crowdsourced surveys.
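To make the kind of quality-control check motivated above concrete, here is a minimal Python sketch that flags potentially untrustworthy survey responses using two common heuristics: implausibly fast completion and failed attention checks. This is an illustration only, not the measure proposed in the paper; the Response fields and both thresholds are assumptions.

    # Illustrative sketch only -- NOT the measure proposed in the paper.
    from dataclasses import dataclass

    @dataclass
    class Response:
        worker_id: str
        completion_seconds: float       # time taken to finish the survey
        attention_checks_passed: int
        attention_checks_total: int

    def is_suspicious(r: Response,
                      min_seconds: float = 60.0,        # assumed threshold
                      min_pass_rate: float = 0.75) -> bool:  # assumed threshold
        """Return True if a response trips either heuristic."""
        too_fast = r.completion_seconds < min_seconds
        pass_rate = (r.attention_checks_passed / r.attention_checks_total
                     if r.attention_checks_total else 1.0)
        return too_fast or pass_rate < min_pass_rate

    # Example: a 35-second response that failed 1 of 2 checks is flagged.
    print(is_suspicious(Response("w42", 35.0, 1, 2)))  # True

In practice such hard thresholds would be tuned per task; the paper's own contribution is a behavioral analysis and measurement method rather than fixed cutoffs.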





    Published In

    CHI '15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems
    April 2015
    4290 pages
    ISBN:9781450331456
    DOI:10.1145/2702123
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. crowdsourcing
    2. malicious intent
    3. microtasks
    4. online surveys
    5. user behavior

    Qualifiers

    • Research-article


    Conference

CHI '15: CHI Conference on Human Factors in Computing Systems
April 18 - 23, 2015
Seoul, Republic of Korea

    Acceptance Rates

CHI '15 Paper Acceptance Rate: 486 of 2,120 submissions, 23%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



Article Metrics

• Downloads (last 12 months): 113
• Downloads (last 6 weeks): 13

Reflects downloads up to 22 Nov 2024


    Cited By

• (2025) Large Scale Anonymous Collusion and its detection in crowdsourcing. Expert Systems with Applications 259 (125284). https://doi.org/10.1016/j.eswa.2024.125284. Online publication date: Jan-2025.
• (2024) A System Design Perspective for Business Growth in a Crowdsourced Data Labeling Practice. Algorithms 17(8), 357. https://doi.org/10.3390/a17080357. Online publication date: 15-Aug-2024.
• (2024) Revisiting Bundle Recommendation for Intent-aware Product Bundling. ACM Transactions on Recommender Systems. https://doi.org/10.1145/3652865. Online publication date: 15-Mar-2024.
• (2024) To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems. Proceedings of the 35th ACM Conference on Hypertext and Social Media, 98-105. https://doi.org/10.1145/3648188.3675130. Online publication date: 10-Sep-2024.
• (2024) The State of Pilot Study Reporting in Crowdsourcing: A Reflection on Best Practices and Guidelines. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-45. https://doi.org/10.1145/3641023. Online publication date: 26-Apr-2024.
• (2024) Belief Miner: A Methodology for Discovering Causal Beliefs and Causal Illusions from General Populations. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-37. https://doi.org/10.1145/3637298. Online publication date: 26-Apr-2024.
• (2024) Adaptive In-Context Learning with Large Language Models for Bundle Generation. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 966-976. https://doi.org/10.1145/3626772.3657808. Online publication date: 10-Jul-2024.
• (2024) Archiving and Temporal Analysis of Behavioral Web Data - Tales from the Inside. Companion Proceedings of the ACM Web Conference 2024, 1373-1374. https://doi.org/10.1145/3589335.3641260. Online publication date: 13-May-2024.
• (2024) Attention-Based Speech Enhancement Using Human Quality Perception Modeling. IEEE/ACM Transactions on Audio, Speech and Language Processing 32, 250-260. https://doi.org/10.1109/TASLP.2023.3328282. Online publication date: 1-Jan-2024.
• (2024) Mood matters: the interplay of personality in ethical perceptions in crowdsourcing. Behaviour & Information Technology, 1-23. https://doi.org/10.1080/0144929X.2024.2349786. Online publication date: 17-May-2024.
