
The State of Pilot Study Reporting in Crowdsourcing: A Reflection on Best Practices and Guidelines

Published: 26 April 2024

Abstract

Pilot studies are an essential cornerstone of the design of crowdsourcing campaigns, yet they are often only mentioned in passing in the scholarly literature. A lack of details surrounding pilot studies in crowdsourcing research hinders the replication of studies and the reproduction of findings, stalling potential scientific advances. We conducted a systematic literature review on the current state of pilot study reporting at the intersection of crowdsourcing and HCI research. Our review of ten years of literature included 171 articles published in the proceedings of the Conference on Human Computation and Crowdsourcing (AAAI HCOMP) and the ACM Digital Library. We found that pilot studies in crowdsourcing research (i.e., crowd pilot studies) are often under-reported in the literature. Important details, such as the number of workers and rewards to workers, are often not reported. On the basis of our findings, we reflect on the current state of practice and formulate a set of best practice guidelines for reporting crowd pilot studies in crowdsourcing research. We also provide implications for the design of crowdsourcing platforms and make practical suggestions for supporting crowd pilot study reporting.

References

[1]
Utku Günay Acer, Marc van den Broeck, Claudio Forlivesi, Florian Heller, and Fahim Kawsar. 2019. Scaling Crowdsourcing with Mobile Workforce: A Case Study with Belgian Postal Service. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 3, 2, Article 35 (2019), 32 pages. https://doi.org/10.1145/3328906
[2]
Bhoomika Agarwal, Abhiram Ravikumar, and Snehanshu Saha. 2016. A Novel Approach to Big Data Veracity Using Crowdsourcing Techniques and Bayesian Predictors. In Proceedings of the 9th Annual ACM India Conference (COMPUTE '16). ACM, New York, NY, USA, 153--160. https://doi.org/10.1145/2998476.2998498
[3]
Jonathan Aigrain, Arnaud Dapogny, Kévin Bailly, Séverine Dubuisson, Marcin Detyniecki, and Mohamed Chetouani. 2016. On Leveraging Crowdsourced Data for Automatic Perceived Stress Detection. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI '16). ACM, New York, NY, USA, 113--120. https://doi.org/10.1145/2993148.2993200
[4]
Alan Aipe and Ujwal Gadiraju. 2018. SimilarHITs: Revealing the Role of Task Similarity in Microtask Crowdsourcing. In Proceedings of the 29th on Hypertext and Social Media (HT '18). ACM, New York, NY, USA, 115--122. https://doi.org/10.1145/3209542.3209558
[5]
Fouad Alallah, Ali Neshati, Nima Sheibani, Yumiko Sakamoto, Andrea Bunt, Pourang Irani, and Khalad Hasan. 2018. Crowdsourcing vs Laboratory-Style Social Acceptability Studies? Examining the Social Acceptability of Spatial User Interactions for Head-Worn Displays. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--7. https://doi.org/10.1145/3173574.3173884
[6]
Omar Alonso. 2009. Guidelines for Designing Crowdsourcing-based Relevance Experiments.
[7]
Omar Alonso, Catherine C. Marshall, and Marc Najork. 2013. Are Some Tweets More Interesting Than Others? #HardQuestion. In Proceedings of the Symposium on Human-Computer Interaction and Information Retrieval (HCIR '13). ACM, New York, NY, USA, Article 2, 10 pages. https://doi.org/10.1145/2528394.2528396
[8]
Omar Alonso, Catherine C. Marshall, and Marc Najork. 2015. Debugging a Crowdsourced Task with Low Inter-Rater Agreement. In Proceedings of the 15th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL '15). ACM, New York, NY, USA, 101--110. https://doi.org/10.1145/2756406.2757741
[9]
Abdullah Alshaibani, Sylvia Carrell, Li-Hsin Tseng, Jungmin Shin, and Alexander Quinn. 2020. Privacy-Preserving Face Redaction Using Crowdsourcing. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 8, 1 (2020), 13--22. https://doi.org/10.1609/hcomp.v8i1.7459
[10]
Gabriel Amaral, Alessandro Piscopo, Lucie-aimée Kaffee, Odinaldo Rodrigues, and Elena Simperl. 2021. Assessing the Quality of Sources in Wikidata Across Languages: A Hybrid Approach. J. Data and Information Quality, Vol. 13, 4, Article 23 (2021), 35 pages. https://doi.org/10.1145/3484828
[11]
Maria Anna Ambrosino, Jerry Andriessen, Vanja Annunziata, Massimo De Santo, Carmela Luciano, Mirjam Pardijs, Donato Pirozzi, and Gianluca Santangelo. 2018. Protection and Preservation of Campania Cultural Heritage Engaging Local Communities via the Use of Open Data. In Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age (dg.o '18). ACM, New York, NY, USA, Article 50, 8 pages. https://doi.org/10.1145/3209281.3209347
[12]
American Psychological Association (Ed.). 2020. Publication Manual of the American Psychological Association. The Official Guide to APA Style 7th ed.). American Psychological Association, Washington, D.C.
[13]
Ofra Amir, Yuval Shahar, Ya'akov Gal, and Litan Ilani. 2013. On the Verification Complexity of Group Decision-Making Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 1, 1 (2013), 2--8. https://doi.org/10.1609/hcomp.v1i1.13072
[14]
Samreen Anjum, Chi Lin, and Danna Gurari. 2021. CrowdMOT: Crowdsourcing Strategies for Tracking Multiple Objects in Videos. Proc. ACM Hum.-Comput. Interact., Vol. 4, CSCW3, Article 266 (2021), 25 pages. https://doi.org/10.1145/3434175
[15]
Jaime Arguello, Bogeum Choi, and Robert Capra. 2018. Factors Influencing Users' Information Requests: Medium, Target, and Extra-Topical Dimension. ACM Trans. Inf. Syst., Vol. 36, 4, Article 41 (2018), 37 pages. https://doi.org/10.1145/3209624
[16]
Agathe Balayn, Gaole He, Andrea Hu, Jie Yang, and Ujwal Gadiraju. 2022. Ready Player One! Eliciting Diverse Knowledge Using A Configurable Game. In Proceedings of the ACM Web Conference 2022. ACM, New York, NY, USA, 1709--1719. https://doi.org/10.1145/3485447.3512241
[17]
Nat a M. Barbosa and Monchu Chen. 2019. Rehumanized Crowdsourcing: A Labeling Framework Addressing Bias and Ethics in Machine Learning. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300773
[18]
Sean Bell, Paul Upchurch, Noah Snavely, and Kavita Bala. 2013. OpenSurfaces: A Richly Annotated Catalog of Surface Appearance. ACM Trans. Graph., Vol. 32, 4, Article 111 (2013), 17 pages. https://doi.org/10.1145/2461912.2462002
[19]
David Benyon. 2013. Designing Interactive Systems: A Comprehensive Guide to HCI, UX and Interaction Design. Trans-Atlantic Publications, Inc.
[20]
Michael Borish and Benjamin Lok. 2016. Rapid Low-Cost Virtual Human Bootstrapping via the Crowd. ACM Trans. Intell. Syst. Technol., Vol. 7, 4, Article 47 (2016), 20 pages. https://doi.org/10.1145/2897366
[21]
Ria Mae Borromeo, Thomas Laurent, and Motomichi Toyama. 2016. The Influence of Crowd Type and Task Complexity on Crowdsourced Work Quality. In Proceedings of the 20th International Database Engineering & Applications Symposium (IDEAS '16). ACM, New York, NY, USA, 70--76. https://doi.org/10.1145/2938503.2938511
[22]
Jonathan Bragg, Mausam, and Daniel S. Weld. 2018. Sprout: Crowd-Powered Task Design for Crowdsourcing. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST '18). ACM, New York, NY, USA, 165--176. https://doi.org/10.1145/3242587.3242598
[23]
Robin Brewer, Meredith Ringel Morris, and Anne Marie Piper. 2016. “Why Would Anybody Do This?”: Understanding Older Adults' Motivations and Challenges in Crowd Work. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 2246--2257. https://doi.org/10.1145/2858036.2858198
[24]
Steven Burrows, Martin Potthast, and Benno Stein. 2013. Paraphrase Acquisition via Crowdsourcing and Machine Learning. ACM Trans. Intell. Syst. Technol., Vol. 4, 3, Article 43 (2013), 21 pages. https://doi.org/10.1145/2483669.2483676
[25]
Michele A. Burton, Erin Brady, Robin Brewer, Callie Neylan, Jeffrey P. Bigham, and Amy Hurst. 2012. Crowdsourcing Subjective Fashion Advice Using VizWiz: Challenges and Opportunities. In Proceedings of the 14th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '12). ACM, New York, NY, USA, 135--142. https://doi.org/10.1145/2384916.2384941
[26]
Julia Cambre, Jessica Colnago, Jim Maddock, Janice Tsai, and Jofish Kaye. 2020. Choice of Voices: A Large-Scale Evaluation of Text-to-Speech Voice Quality for Long-Form Content. ACM, New York, NY, USA, 1--13. https://doi.org/10.1145/3313831.3376789
[27]
Gülcan Can, Jean-Marc Odobez, and Daniel Gatica-Perez. 2018. How to Tell Ancient Signs Apart? Recognizing and Visualizing Maya Glyphs with CNNs. J. Comput. Cult. Herit., Vol. 11, 4, Article 20 (2018), 25 pages. https://doi.org/10.1145/3230670
[28]
Mark Cartwright, Ayanna Seals, Justin Salamon, Alex Williams, Stefanie Mikloska, Duncan MacConnell, Edith Law, Juan P. Bello, and Oded Nov. 2017. Seeing Sound: Investigating the Effects of Visualizations and Complexity on Crowdsourced Audio Annotations. Proc. ACM Hum.-Comput. Interact., Vol. 1, CSCW, Article 29 (2017), 21 pages. https://doi.org/10.1145/3134664
[29]
Joseph Chee Chang, Saleema Amershi, and Ece Kamar. 2017. Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 2334--2346. https://doi.org/10.1145/3025453.3026044
[30]
Shuo Chang, Peng Dai, Jilin Chen, and Ed H. Chi. 2015. Got Many Labels? Deriving Topic Labels from Multiple Sources for Social Media Posts Using Crowdsourcing and Ensemble Learning. In Proceedings of the 24th International Conference on World Wide Web (WWW '15 Companion). ACM, New York, NY, USA, 397--406. https://doi.org/10.1145/2740908.2745401
[31]
Chen Chen, Paweł W. Wo'zniak, Andrzej Romanowski, Mohammad Obaid, Tomasz Jaworski, Jacek Kucharski, Krzysztof Grudzie'n, Shengdong Zhao, and Morten Fjeld. 2016. Using Crowdsourcing for Scientific Analysis of Industrial Tomographic Images. ACM Trans. Intell. Syst. Technol., Vol. 7, 4, Article 52 (2016), 25 pages. https://doi.org/10.1145/2897370
[32]
Justin Cheng, Jaime Teevan, and Michael S. Bernstein. 2015. Measuring Crowdsourcing Effort with Error-Time Curves. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1365--1374. https://doi.org/10.1145/2702123.2702145
[33]
Parmit K. Chilana, Amy J. Ko, Jacob O. Wobbrock, and Tovi Grossman. 2013. A Multi-Site Field Study of Crowdsourced Contextual Help: Usage and Perspectives of End Users and Software Teams. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 217--226. https://doi.org/10.1145/2470654.2470685
[34]
Charles L. A. Clarke, Alexandra Vtyurina, and Mark D. Smucker. 2021. Assessing Top-k Preferences. ACM Trans. Inf. Syst., Vol. 39, 3, Article 33 (2021), 21 pages. https://doi.org/10.1145/3451161
[35]
Cihan Cobanoglu, Muhittin Cavusoglu, and Turktarhan Gozde. 2021. A beginner's guide and best practices for using crowdsourcing platforms for survey research: The case of Amazon Mechanical Turk (MTurk). Journal of Global Business Insights, Vol. 6, 1 (2021), 92--97. https://doi.org/10.5038/2640--6489.6.1.1177
[36]
Andy Cockburn, Pierre Dragicevic, Lonni Besanccon, and Carl Gutwin. 2020. Threats of a Replication Crisis in Empirical Computer Science. Commun. ACM, Vol. 63, 8 (2020), 70--79. https://doi.org/10.1145/3360311
[37]
Michael Correll and Jeffrey Heer. 2017. Regression by Eye: Estimating Trends in Bivariate Visualizations. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 1387--1396. https://doi.org/10.1145/3025453.3025922
[38]
Michael Correll, Dominik Moritz, and Jeffrey Heer. 2018. Value-Suppressing Uncertainty Palettes. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--11. https://doi.org/10.1145/3173574.3174216
[39]
Crowdsourcing-code.com. 2017. Ground Rules for Paid Crowdsourcing/Crowdworking. Guideline for a prosperous and fair cooperation between crowdsourcing companies and crowdworkers. https://www.crowdsourcing-code.com/media/documents/Code_of_Conduct_EN.pdf
[40]
Florian Daniel, Pavel Kucherbaev, Cinzia Cappiello, Boualem Benatallah, and Mohammad Allahbakhsh. 2018. Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques, and Assurance Actions. ACM Comput. Surv., Vol. 51, 1, Article 7 (2018), 40 pages. https://doi.org/10.1145/3148148
[41]
Dan Davis, Claudia Hauff, and Geert-Jan Houben. 2018. Evaluating Crowdworkers as a Proxy for Online Learners in Video-Based Learning Contexts. Proc. ACM Hum.-Comput. Interact., Vol. 2, CSCW, Article 42 (2018), 16 pages. https://doi.org/10.1145/3274311
[42]
Greg d'Eon, Joslin Goh, Kate Larson, and Edith Law. 2019. Paying Crowd Workers for Collaborative Work. Proc. ACM Hum.-Comput. Interact., Vol. 3, CSCW, Article 125 (2019), 24 pages. https://doi.org/10.1145/3359227
[43]
Nicholas J. DeVito and Ben Goldacre. 2019. Catalogue of Bias: Publication Bias. BMJ Evidence-Based Medicine, Vol. 24, 2 (2019), 53--54. https://doi.org/10.1136/bmjebm-2018--111107
[44]
Nicholas Diakopoulos, Daniel Trielli, and Grace Lee. 2021. Towards Understanding and Supporting Journalistic Practices Using Semi-Automated News Discovery Tools. Proc. ACM Hum.-Comput. Interact., Vol. 5, CSCW2, Article 406 (2021), 30 pages. https://doi.org/10.1145/3479550
[45]
Djellel Difallah, Elena Filatova, and Panos Ipeirotis. 2018. Demographics and Dynamics of Mechanical Turk Workers. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM '18). ACM, New York, NY, USA, 135--143. https://doi.org/10.1145/3159652.3159661
[46]
Djellel Eddine Difallah, Michele Catasta, Gianluca Demartini, and Philippe Cudré-Mauroux. 2014. Scaling-up the Crowd: Micro-task Pricing Schemes for Worker Retention and Latency Improvement. In Second AAAI Conference on Human Computation and Crowdsourcing. AAAI, Palo Alto, CA, USA. https://doi.org/10.1609/hcomp.v2i1.13154
[47]
Djellel Eddine Difallah, Michele Catasta, Gianluca Demartini, Panagiotis G. Ipeirotis, and Philippe Cudré-Mauroux. 2015. The Dynamics of Micro-Task Crowdsourcing: The Case of Amazon MTurk. In Proceedings of the 24th International Conference on World Wide Web (WWW '15). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 238--247. https://doi.org/10.1145/2736277.2741685
[48]
Evanthia Dimara, Anastasia Bezerianos, and Pierre Dragicevic. 2017. Narratives in Crowdsourced Evaluation of Visualizations: A Double-Edged Sword?. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 5475--5484. https://doi.org/10.1145/3025453.3025870
[49]
Steven Dow, Elizabeth Gerber, and Audris Wong. 2013. A Pilot Study of Using Crowds in the Classroom. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 227--236. https://doi.org/10.1145/2470654.2470686
[50]
Tim Draws, Alisa Rieger, Oana Inel, Ujwal Gadiraju, and Nava Tintarev. 2021. A Checklist to Combat Cognitive Biases in Crowdsourcing. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 9. AAAI, Palo Alto, CA, USA, 48--59. https://doi.org/10.1609/hcomp.v9i1.18939
[51]
John J. Dudley, Jason T. Jacques, and Per Ola Kristensson. 2019. Crowdsourcing Interface Feature Design with Bayesian Optimization. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300482
[52]
John J. Dudley, Jason T. Jacques, and Per Ola Kristensson. 2021. Crowdsourcing Design Guidance for Contextual Adaptation of Text Content in Augmented Reality. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 731, 14 pages. https://doi.org/10.1145/3411764.3445493
[53]
Dynamo Contributors. 2014. Guidelines for Academic Requesters. Version 1.1 (10/2/2014)., 25 pages. https://irb.northwestern.edu/docs/guidelinesforacademicrequesters-1.pdf
[54]
Florian Echtler and Maximilian H"außler. 2018. Open Source, Open Science, and the Replication Crisis in HCI. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1--8. https://doi.org/10.1145/3170427.3188395
[55]
Tom Edixhoven, Sihang Qiu, Lucie Kuiper, Olivier Dikken, Gwennan Smitskamp, and Ujwal Gadiraju. 2021. Improving Reactions to Rejection in Crowdsourcing Through Self-Reflection. In Proceedings of the 13th ACM Web Science Conference 2021 (WebSci '21). ACM, New York, NY, USA, 74--83. https://doi.org/10.1145/3447535.3462482
[56]
Carsten Eickhoff. 2014. Crowd-Powered Experts: Helping Surgeons Interpret Breast Cancer Images. In Proceedings of the First International Workshop on Gamification for Information Retrieval (GamifIR '14). ACM, New York, NY, USA, 53--56. https://doi.org/10.1145/2594776.2594788
[57]
Carsten Eickhoff, Christopher G. Harris, Arjen P. de Vries, and Padmini Srinivasan. 2012. Quality through Flow and Immersion: Gamifying Crowdsourced Relevance Assessments. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '12). ACM, New York, NY, USA, 871--880. https://doi.org/10.1145/2348283.2348400
[58]
Irene Eleta and Jennifer Golbeck. 2012. A Study of Multilingual Social Tagging of Art Images: Cultural Bridges and Diversity. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW '12). ACM, New York, NY, USA, 695--704. https://doi.org/10.1145/2145204.2145310
[59]
Shaoyang Fan, Ujwal Gadiraju, Alessandro Checco, and Gianluca Demartini. 2020. CrowdCO-OP: Sharing Risks and Rewards in Crowdsourcing. Proc. ACM Hum.-Comput. Interact., Vol. 4, CSCW2, Article 132 (2020), 24 pages. https://doi.org/10.1145/3415203
[60]
Oluwaseyi Feyisetan and Elena Simperl. 2019. Beyond Monetary Incentives: Experiments in Paid Microtask Contests. Trans. Soc. Comput., Vol. 2, 2, Article 6 (2019), 31 pages. https://doi.org/10.1145/3321700
[61]
Riccardo Fogliato, Alexandra Chouldechova, and Zachary Lipton. 2021. The Impact of Algorithmic Risk Assessments on Human Predictions and Its Analysis via Crowdsourcing Studies. Proc. ACM Hum.-Comput. Interact., Vol. 5, CSCW2, Article 428 (2021), 24 pages. https://doi.org/10.1145/3479572
[62]
Erin D. Foster and Ariel Deardorff. 2017. Open Science Framework (OSF). Journal of the Medical Library Association (JMLA), Vol. 105, 2 (2017), 203. https://doi.org/10.5195/jmla.2017.88
[63]
Ujwal Gadiraju, Alessandro Checco, Neha Gupta, and Gianluca Demartini. 2017a. Modus Operandi of Crowd Workers: The Invisible Role of Microtask Work Environments. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 1, 3, Article 49 (2017), 29 pages. https://doi.org/10.1145/3130914
[64]
Ujwal Gadiraju and Gianluca Demartini. 2019. Understanding Worker Moods and Reactions to Rejection in Crowdsourcing. In Proceedings of the 30th ACM Conference on Hypertext and Social Media (HT '19). ACM, New York, NY, USA, 211--220. https://doi.org/10.1145/3342220.3343644
[65]
Ujwal Gadiraju, Gianluca Demartini, Ricardo Kawase, and Stefan Dietze. 2015a. Human Beyond the Machine: Challenges and Opportunities of Microtask Crowdsourcing. IEEE Intelligent Systems, Vol. 30, 4 (2015), 81--85. https://doi.org/10.1109/MIS.2015.66
[66]
Ujwal Gadiraju, Ricardo Kawase, Stefan Dietze, and Gianluca Demartini. 2015b. Understanding Malicious Behavior in Crowdsourcing Platforms: The Case of Online Surveys. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1631--1640. https://doi.org/10.1145/2702123.2702443
[67]
Ujwal Gadiraju, Sebastian Möller, Martin Nöllenburg, Dietmar Saupe, Sebastian Egger-Lampl, Daniel Archambault, and Brian Fisher. 2017b. Crowdsourcing Versus the Laboratory: Towards Human-Centered Experiments Using the Crowd. In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Daniel Archambault, Helen Purchase, and Tobias Hoßfeld (Eds.). Springer International Publishing, Cham, 6--26. https://doi.org/10.1007/978--3--319--66435--4_2
[68]
Ujwal Gadiraju, Jie Yang, and Alessandro Bozzon. 2017c. Clarity is a Worthwhile Quality: On the Role of Task Clarity in Microtask Crowdsourcing. In Proceedings of the 28th ACM Conference on Hypertext and Social Media (HT '17). ACM, New York, NY, USA, 5--14. https://doi.org/10.1145/3078714.3078715
[69]
Ujwal Gadiraju, Ran Yu, Stefan Dietze, and Peter Holtz. 2018. Analyzing Knowledge Gain of Users in Informational Search Sessions on the Web. In Proceedings of the 2018 Conference on Human Information Interaction & Retrieval (CHIIR '18). ACM, New York, NY, USA, 2--11. https://doi.org/10.1145/3176349.3176381
[70]
Barney G. Glaser and Anselm L. Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Transaction, Piscataway, New Jersey.
[71]
Jorge Goncalves, Denzil Ferreira, Simo Hosio, Yong Liu, Jakob Rogstadius, Hannu Kukka, and Vassilis Kostakos. 2013. Crowdsourcing on the Spot: Altruistic Use of Public Displays, Feasibility, Performance, and Behaviours. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '13). ACM, New York, NY, USA, 753--762. https://doi.org/10.1145/2493432.2493481
[72]
Jorge Goncalves, Simo Hosio, Denzil Ferreira, and Vassilis Kostakos. 2014. Game of Words: Tagging Places through Crowdsourcing on Public Displays. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS '14). ACM, New York, NY, USA, 705--714. https://doi.org/10.1145/2598510.2598514
[73]
Jorge Goncalves, Simo Hosio, Maja Vukovic, and Shin'ichi Konomi. 2017. Mobile and situated crowdsourcing. International Journal of Human-Computer Studies, Vol. 102 (2017), 1--3. https://doi.org/10.1016/j.ijhcs.2016.12.001
[74]
Leo A. Goodman. 1961. Snowball Sampling. The Annals of Mathematical Statistics, Vol. 32, 1 (1961), 148--170. http://www.jstor.org/stable/2237615
[75]
Mary L. Gray and Siddharth Suri. 2019. Ghost Work. How to stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt, Boston and New York, N.Y.
[76]
Daniela Grijincu, Miguel A. Nacenta, and Per Ola Kristensson. 2014. User-Defined Interface Gestures: Dataset and Analysis. In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces (ITS '14). ACM, New York, NY, USA, 25--34. https://doi.org/10.1145/2669485.2669511
[77]
Lei Han, Kevin Roitero, Ujwal Gadiraju, Cristina Sarasua, Alessandro Checco, Eddy Maddalena, and Gianluca Demartini. 2019a. All Those Wasted Hours: On Task Abandonment in Crowdsourcing. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. ACM, New York, NY, USA, 321--329. https://doi.org/10.1145/3289600.3291035
[78]
Lei Han, Kevin Roitero, Ujwal Gadiraju, Cristina Sarasua, Alessandro Checco, Eddy Maddalena, and Gianluca Demartini. 2019b. The Impact of Task Abandonment in Crowdsourcing. IEEE Transactions on Knowledge and Data Engineering, Vol. 33, 5 (2019), 2266--2279. https://doi.org/10.1109/TKDE.2019.2948168
[79]
Lei Han, Rudra Sawant, Shaoyang Fan, Glenn Kefford, and Gianluca Demartini. 2021. An Analysis of the Australian Political Discourse in Sponsored Social Media Content. In Proceedings of the 25th Australasian Document Computing Symposium (ADCS '21). ACM, New York, NY, USA, Article 1, 5 pages. https://doi.org/10.1145/3503516.3503533
[80]
Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Chris Callison-Burch, and Jeffrey P. Bigham. 2018. A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--14. https://doi.org/10.1145/3173574.3174023
[81]
Kotaro Hara, Shiri Azenkot, Megan Campbell, Cynthia L. Bennett, Vicki Le, Sean Pannella, Robert Moore, Kelly Minckler, Rochelle H. Ng, and Jon E. Froehlich. 2013. Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '13). ACM, New York, NY, USA, Article 16, 8 pages. https://doi.org/10.1145/2513383.2513448
[82]
Kotaro Hara, Shiri Azenkot, Megan Campbell, Cynthia L. Bennett, Vicki Le, Sean Pannella, Robert Moore, Kelly Minckler, Rochelle H. Ng, and Jon E. Froehlich. 2015. Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View: An Extended Analysis. ACM Trans. Access. Comput., Vol. 6, 2, Article 5 (2015), 23 pages. https://doi.org/10.1145/2717513
[83]
Kotaro Hara, Jin Sun, Robert Moore, David Jacobs, and Jon Froehlich. 2014. Tohme: Detecting Curb Ramps in Google Street View Using Crowdsourcing, Computer Vision, and Machine Learning. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14). ACM, New York, NY, USA, 189--204. https://doi.org/10.1145/2642918.2647403
[84]
Chris Harrison and Haakon Faste. 2014. Implications of Location and Touch for On-Body Projected Interfaces. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS '14). ACM, New York, NY, USA, 543--552. https://doi.org/10.1145/2598510.2598587
[85]
Gaole He, Agathe Balayn, Stefan Buijsman, Jie Yang, and Ujwal Gadiraju. 2022. It is Like Finding a Polar Bear in the Savannah! Concept-level AI Explanations with Analogical Inference from Commonsense Knowledge. In Proceedings of the Conference on Human Computation and Crowdsourcing (HCOMP '22, Vol. 10). AAAI, Palo Alto, CA, USA, 89--101. https://doi.org/10.1609/hcomp.v10i1.21990
[86]
Gary T. Henry. 2002. Practical Sampling. Sage, Newbury Park.
[87]
Danula Hettiachchi, Niels van Berkel, Vassilis Kostakos, and Jorge Goncalves. 2020. CrowdCog: A Cognitive Skill Based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing. Proc. ACM Hum.-Comput. Interact., Vol. 4, CSCW2, Article 110 (2020), 22 pages. https://doi.org/10.1145/3415181
[88]
Michiel Hildebrand, Maarten Brinkerink, Riste Gligorov, Martijn van Steenbergen, Johan Huijkman, and Johan Oomen. 2013. Waisda? Video Labeling Game. In Proceedings of the 21st ACM International Conference on Multimedia (MM '13). ACM, New York, NY, USA, 823--826. https://doi.org/10.1145/2502081.2502221
[89]
Matthias Hirth, Kathrin Borchert, Fabian Allendorf, Florian Metzger, and Tobias Hoßfeld. 2019. Crowd-Based Study of Gameplay Impairments and Player Performance in DOTA 2. In Proceedings of the 4th Internet-QoE Workshop on QoE-Based Analysis and Management of Data Communication Networks (Internet-QoE'19). ACM, New York, NY, USA, 19--24. https://doi.org/10.1145/3349611.3355545
[90]
Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan. 2015. Incentivizing High Quality Crowdwork. In Proceedings of the 24th International Conference on World Wide Web (WWW '15). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 419--429. https://doi.org/10.1145/2736277.2741102
[91]
Jonggi Hong and Leah Findlater. 2018. Identifying Speech Input Errors Through Audio-Only Interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3173574.3174141
[92]
Simo Hosio, Jorge Goncalves, Vili Lehdonvirta, Denzil Ferreira, and Vassilis Kostakos. 2014. Situated Crowdsourcing Using a Market Model. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14). ACM, New York, NY, USA, 55--64. https://doi.org/10.1145/2642918.2647362
[93]
Simo Johannes Hosio, Jaro Karppinen, Esa-Pekka Takala, Jani Takatalo, Jorge Goncalves, Niels van Berkel, Shin'ichi Konomi, and Vassilis Kostakos. 2018. Crowdsourcing Treatments for Low Back Pain. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3173574.3173850
[94]
Kevin Hu, Snehalkumar 'Neil' S. Gaikwad, Madelon Hulsebos, Michiel A. Bakker, Emanuel Zgraggen, César Hidalgo, Tim Kraska, Guoliang Li, Arvind Satyanarayan, and cCaugatay Demiralp. 2019. VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300892
[95]
Xiao Hu, Haobo Wang, Anirudh Vegesana, Somesh Dube, Kaiwen Yu, Gore Kao, Shuo-Han Chen, Yung-Hsiang Lu, George K. Thiruvathukal, and Ming Yin. 2020. Crowdsourcing Detection of Sampling Biases in Image Datasets. ACM, New York, NY, USA, 2955--2961. https://doi.org/10.1145/3366423.3380063
[96]
Shih-Wen Huang and Wai-Tat Fu. 2013. Don't Hide in the Crowd! Increasing Social Transparency between Peer Workers Improves Crowdsourcing Outcomes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 621--630. https://doi.org/10.1145/2470654.2470743
[97]
Yi-Ching Huang, Jiunn-Chia Huang, Hao-Chuan Wang, and Jane Hsu. 2017. Supporting ESL Writing by Prompting Crowdsourced Structural Feedback. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 5, 1 (2017), 71--78. https://doi.org/10.1609/hcomp.v5i1.13313
[98]
Christoph Hube, Besnik Fetahu, and Ujwal Gadiraju. 2019. Understanding and Mitigating Worker Biases in the Crowdsourced Collection of Subjective Judgments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300637
[99]
Ken Hyland. 1996. Writing Without Conviction? Hedging in Science Research Articles. Applied Linguistics, Vol. 17, 4 (1996), 433--454. https://doi.org/10.1093/applin/17.4.433
[100]
Kazushi Ikeda and Michael S. Bernstein. 2016. Pay It Backward: Per-Task Payments on Crowdsourcing Platforms Reduce Productivity. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4111--4121. https://doi.org/10.1145/2858036.2858327
[101]
Junyong In. 2017. Introduction of a Pilot Study. Korean Journal of Anesthesiology, Vol. 70, 6 (2017), 601--605. https://doi.org/10.4097/kjae.2017.70.6.601
[102]
Oana Inel, Giannis Haralabopoulos, Dan Li, Christophe Van Gysel, Zoltán Szlávik, Elena Simperl, Evangelos Kanoulas, and Lora Aroyo. 2018. Studying Topical Relevance with Evidence-Based Crowdsourcing. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM '18). ACM, New York, NY, USA, 1253--1262. https://doi.org/10.1145/3269206.3271779
[103]
Lilly C. Irani and M. Six Silberman. 2013. Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 611--620. https://doi.org/10.1145/2470654.2470742
[104]
Kasthuri Jayarajah and Archan Misra. 2018. Predicting Episodes of Non-Conformant Mobility in Indoor Environments. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 2, 4, Article 172 (2018), 24 pages. https://doi.org/10.1145/3287050
[105]
Hernisa Kacorri, Kaoru Shinkawa, and Shin Saito. 2014. Introducing Game Elements in Crowdsourced Video Captioning by Non-Experts. In Proceedings of the 11th Web for All Conference (W4A '14). ACM, New York, NY, USA, Article 29, 4 pages. https://doi.org/10.1145/2596695.2596713
[106]
Thivya Kandappu, Archan Misra, and Randy Tandriansyah. 2017. Collaboration Trumps Homophily in Urban Mobile Crowdsourcing. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17). ACM, New York, NY, USA, 902--915. https://doi.org/10.1145/2998181.2998311
[107]
Alireza Karduni, Ryan Wesslen, Isaac Cho, and Wenwen Dou. 2020. Du Bois Wrapped Bar Chart: Visualizing Categorical Data with Disproportionate Values. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3313831.3376365
[108]
Johannes Kiesel, Florian Kneist, Lars Meyer, Kristof Komlossy, Benno Stein, and Martin Potthast. 2020. Web Page Segmentation Revisited: Evaluation Framework and Dataset. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM '20). ACM, New York, NY, USA, 3047--3054. https://doi.org/10.1145/3340531.3412782
[109]
Juho Kim, Phu Tran Nguyen, Sarah Weir, Philip J. Guo, Robert C. Miller, and Krzysztof Z. Gajos. 2014. Crowdsourcing Step-by-Step Information Extraction to Enhance Existing How-to Videos. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 4017--4026. https://doi.org/10.1145/2556288.2556986
[110]
Lawrence H. Kim and Sean Follmer. 2017. UbiSwarm: Ubiquitous Robotic Interfaces and Investigation of Abstract Motion as a Display. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 1, 3, Article 66 (2017), 20 pages. https://doi.org/10.1145/3130931
[111]
Sung-Hee Kim, Hyokun Yun, and Ji Soo Yi. 2012. How to Filter out Random Clickers in a Crowdsourcing-Based Study?. In Proceedings of the 2012 BELIV Workshop: Beyond Time and Errors -- Novel Evaluation Methods for Visualization (BELIV '12). ACM, New York, NY, USA, Article 15, 7 pages. https://doi.org/10.1145/2442576.2442591
[112]
Aniket Kittur, Ed H. Chi, and Bongwon Suh. 2008. Crowdsourcing User Studies with Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, NY, USA, 453--456. https://doi.org/10.1145/1357054.1357127
[113]
Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. The Future of Crowd Work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW '13). ACM, New York, NY, USA, 1301--1318. https://doi.org/10.1145/2441776.2441923
[114]
Rachel Kohler, John Purviance, and Kurt Luther. 2017. Supporting Image Geolocation with Diagramming and Crowdsourcing. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 5, 1 (2017), 98--107. https://doi.org/10.1609/hcomp.v5i1.13296
[115]
Caitlin Kuhlman, Diana Doherty, Malika Nurbekova, Goutham Deva, Zarni Phyo, Paul-Henry Schoenhagen, MaryAnn VanValkenburg, Elke Rundensteiner, and Lane Harrison. 2019. Evaluating Preference Collection Methods for Interactive Ranking Analytics. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--11. https://doi.org/10.1145/3290605.3300742
[116]
Anand Kulkarni, Matthew Can, and Björn Hartmann. 2012. Collaboratively Crowdsourcing Workflows with Turkomatic. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW '12). ACM, New York, NY, USA, 1003--1012. https://doi.org/10.1145/2145204.2145354
[117]
Xingyu Lan, Xinyue Xu, and Nan Cao. 2021. Understanding Narrative Linearity for Telling Expressive Time-Oriented Stories. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 604, 13 pages. https://doi.org/10.1145/3411764.3445344
[118]
Edith Law, Ming Yin, Joslin Goh, Kevin Chen, Michael A. Terry, and Krzysztof Z. Gajos. 2016. Curiosity Killed the Cat, but Makes Crowdwork Better. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4098--4110. https://doi.org/10.1145/2858036.2858144
[119]
Doris Jung-Lin Lee, Joanne Lo, Moonhyok Kim, and Eric Paulos. 2016. Crowdclass: Designing Classification-Based Citizen Science Learning Modules. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 4, 1 (2016), 109--118. https://doi.org/10.1609/hcomp.v4i1.13273
[120]
Michael J. Lee and Amy J. Ko. 2015. Comparing the Effectiveness of Online Learning Approaches on CS1 Learning Outcomes. In Proceedings of the Eleventh Annual International Conference on International Computing Education Research (ICER '15). ACM, New York, NY, USA, 237--246. https://doi.org/10.1145/2787622.2787709
[121]
Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, and Jill Fain Lehman. 2016. Semi-Situated Learning of Verbal and Nonverbal Content for Repeated Human-Robot Interaction. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI '16). ACM, New York, NY, USA, 13--20. https://doi.org/10.1145/2993148.2993190
[122]
Fritz Lekschas, Spyridon Ampanavos, Pao Siangliulue, Hanspeter Pfister, and Krzysztof Z. Gajos. 2021. Ask Me or Tell Me? Enhancing the Effectiveness of Crowdsourced Design Feedback. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 564, 12 pages. https://doi.org/10.1145/3411764.3445507
[123]
Andrew C. Leon, Lori L. Davis, and Helena C. Kraemer. 2011. The Role and Interpretation of Pilot Studies in Clinical Research. Journal of Psychiatric Research, Vol. 45, 5 (2011), 626--629. https://doi.org/10.1016/j.jpsychires.2010.10.008
[124]
Blaine Lewis and Daniel Vogel. 2020. Longer Delays in Rehearsal-Based Interfaces Increase Expert Use. ACM Trans. Comput.-Hum. Interact., Vol. 27, 6, Article 45 (2020), 41 pages. https://doi.org/10.1145/3418196
[125]
Tianyi Li, Chandler J. Manns, Chris North, and Kurt Luther. 2019. Dropping the Baton? Understanding Errors and Bottlenecks in a Crowdsourced Sensemaking Pipeline. Proc. ACM Hum.-Comput. Interact., Vol. 3, CSCW, Article 136 (2019), 26 pages. https://doi.org/10.1145/3359238
[126]
Zhaoliang Lun, Evangelos Kalogerakis, and Alla Sheffer. 2015. Elements of Style: Learning Perceptual Shape Style Similarity. ACM Trans. Graph., Vol. 34, 4, Article 84 (2015), 14 pages. https://doi.org/10.1145/2766929
[127]
Kurt Luther, Nathan Hahn, Steven Dow, and Aniket Kittur. 2015. Crowdlines: Supporting Synthesis of Diverse Information Sources through Crowdsourced Outlines. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 3, 1 (2015), 110--119. https://doi.org/10.1609/hcomp.v3i1.13239
[128]
Ioanna Lykourentzou, Angeliki Antoniou, Yannick Naudet, and Steven P. Dow. 2016. Personality Matters: Balancing for Personality Types Leads to Better Outcomes for Crowd Teams. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16). ACM, New York, NY, USA, 260--273. https://doi.org/10.1145/2818048.2819979
[129]
Malcolm MacLeod. 2021. An “Omics” Answer to the Replication Crisis. https://future.com/publomics-replication-crisis/
[130]
Eddy Maddalena, Luis-Daniel Ibá nez, and Elena Simperl. 2020. Mapping Points of Interest Through Street View Imagery and Paid Crowdsourcing. ACM Trans. Intell. Syst. Technol., Vol. 11, 5, Article 63 (2020), 28 pages. https://doi.org/10.1145/3403931
[131]
V.K. Chaithanya Manam, J. Thomas, and Alexander J. Quinn. 2022. TaskLint: Automated Detection of Ambiguities in Task Instructions. In Proceedings of the Conference on Human Computation and Crowdsourcing (HCOMP '22). AAAI, Palo Alto, CA, USA. https://doi.org/10.1609/hcomp.v10i1.21996
[132]
Andrew Mao, Ece Kamar, Yiling Chen, Eric Horvitz, Megan E Schwamb, Chris J Lintott, and Arfon M Smith. 2013. Volunteering Versus Work for Pay: Incentives and Tradeoffs In Crowdsourcing. In First AAAI Conference on Human Computation and Crowdsourcing. AAAI, Palo Alto, CA, USA. https://doi.org/10.1609/hcomp.v1i1.13075
[133]
David Martin, Benjamin V. Hanrahan, Jacki O'Neill, and Neha Gupta. 2014. Being a Turker. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '14). ACM, New York, NY, USA, 224--235. https://doi.org/10.1145/2531602.2531663
[134]
Thomas Mattauch. 2013. Innovate through Crowd Sourcing. In Proceedings of the 41st Annual ACM SIGUCCS Conference on User Services (SIGUCCS '13). ACM, New York, NY, USA, 39--42. https://doi.org/10.1145/2504776.2504796
[135]
Nora McDonald, Sarita Schoenebeck, and Andrea Forte. 2019. Reliability and Inter-Rater Reliability in Qualitative Research: Norms and Guidelines for CSCW and HCI Practice. Proc. ACM Hum.-Comput. Interact., Vol. 3, CSCW, Article 72 (2019), 23 pages. https://doi.org/10.1145/3359174
[136]
Tyler McDonnell, Matthew Lease, Mucahid Kutlu, and Tamer Elsayed. 2016. Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 4, 1 (2016), 139--148. https://doi.org/10.1609/hcomp.v4i1.13287
[137]
Brian McInnis, Dan Cosley, Chaebong Nam, and Gilly Leshed. 2016. Taking a HIT: Designing Around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '2016). 2271--2282. https://doi.org/10.1145/2858036.2858539
[138]
Andrew J. McMinn, Yashar Moshfeghi, and Joemon M. Jose. 2013. Building a Large-Scale Corpus for Evaluating Event Detection on Twitter. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management (CIKM '13). ACM, New York, NY, USA, 409--418. https://doi.org/10.1145/2505515.2505695
[139]
Róis'in McNaney, Mohammad Othman, Dan Richardson, Paul Dunphy, Telmo Amaral, Nick Miller, Helen Stringer, Patrick Olivier, and John Vines. 2016. Speeching: Mobile Crowdsourced Speech Assessment to Support Self-Monitoring and Management for People with Parkinson's. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4464--4476. https://doi.org/10.1145/2858036.2858321
[140]
Vikram Mohanty, Kareem Abdol-Hamid, Courtney Ebersohl, and Kurt Luther. 2019. Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 1 (2019), 86--96. https://doi.org/10.1609/hcomp.v7i1.5272
[141]
Luiz Morais, Yvonne Jansen, Nazareno Andrade, and Pierre Dragicevic. 2021. Can Anthropographics Promote Prosociality? A Review and Large-Sample Study. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 611, 18 pages. https://doi.org/10.1145/3411764.3445637
[142]
Yashar Moshfeghi and Alvaro Francisco Huertas-Rosero. 2021. A Game Theory Approach for Estimating Reliability of Crowdsourced Relevance Assessments. ACM Trans. Inf. Syst., Vol. 40, 3, Article 60 (2021), 29 pages. https://doi.org/10.1145/3480965
[143]
Daniel Mutembesa, Christopher Omongo, and Ernest Mwebaze. 2018. Crowdsourcing Real-Time Viral Disease and Pest Information: A Case of Nation-Wide Cassava Disease Surveillance in a Developing Country. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 6, 1 (2018), 117--125. https://doi.org/10.1609/hcomp.v6i1.13322
[144]
Pranathi Mylavarapu, Adil Yalcin, Xan Gregg, and Niklas Elmqvist. 2019. Ranked-List Visualization: A Graphical Perception Study. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300422
[145]
Anelise Newman, Barry McNamara, Camilo Fosco, Yun Bin Zhang, Pat Sukhum, Matthew Tancik, Nam Wook Kim, and Zoya Bylinskii. 2020. TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data. ACM, New York, NY, USA, 1--13. https://doi.org/10.1145/3313831.3376799
[146]
Dong Nguyen, Dolf Trieschnigg, and Mariët Theune. 2014. Using Crowdsourcing to Investigate Perception of Narrative Similarity. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM '14). ACM, New York, NY, USA, 321--330. https://doi.org/10.1145/2661829.2661918
[147]
Carolina Nobre, Dylan Wootton, Zach Cutler, Lane Harrison, Hanspeter Pfister, and Alexander Lex. 2021. ReVISit: Looking Under the Hood of Interactive Visualization Studies. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 25, 13 pages. https://doi.org/10.1145/3411764.3445382
[148]
Zahra Nouri, Ujwal Gadiraju, Gregor Engels, and Henning Wachsmuth. 2021. What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing. In Proceedings of the 32nd ACM Conference on Hypertext and Social Media (HT '21). ACM, New York, NY, USA, 165--175. https://doi.org/10.1145/3465336.3475109
[149]
Natalya F. Noy, Jonathan Mortensen, Mark A. Musen, and Paul R. Alexander. 2013. Mechanical Turk as an Ontology Engineer? Using Microtasks as a Component of an Ontology-Engineering Workflow. In Proceedings of the 5th Annual ACM Web Science Conference (WebSci '13). ACM, New York, NY, USA, 262--271. https://doi.org/10.1145/2464464.2464482
[150]
Jonas Oppenlaender and Simo Hosio. 2019. Design Recommendations for Augmenting Creative Tasks with Computational Priming. In Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia (MUM '19). ACM, New York, NY, USA, Article 35, 13 pages. https://doi.org/10.1145/3365610.3365621
[151]
Jonas Oppenlaender, Kristy Milland, Aku Visuri, Panos Ipeirotis, and Simo Hosio. 2020a. Creativity on Paid Crowdsourcing Platforms. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). ACM, New York, NY, USA, Article 548, 14 pages. https://doi.org/10.1145/3313831.3376677
[152]
Jonas Oppenlaender, Thanassis Tiropanis, and Simo Hosio. 2020b. CrowdUI: Supporting Web Design with the Crowd. Proc. ACM Hum.-Comput. Interact., Vol. 4, EICS, Article 76 (2020), 28 pages. https://doi.org/10.1145/3394978
[153]
Jonas Oppenlaender, Aku Visuri, Kristy Milland, Panos Ipeirotis, and Simo Hosio. 2020c. What do crowd workers think about creative work?. In Workshop on Worker-Centered Design: Expanding HCI Methods for Supporting Labor. 4 pages. http://jultika.oulu.fi/files/nbnfi-fe2020052538841.pdf
[154]
Maike Paetzel, James Kennedy, Ginevra Castellano, and Jill Fain Lehman. 2018. Incremental Acquisition and Reuse of Multimodal Affective Behaviors in a Conversational Agent. In Proceedings of the 6th International Conference on Human-Agent Interaction (HAI '18). ACM, New York, NY, USA, 92--100. https://doi.org/10.1145/3284432.3284469
[155]
Anshul Vikram Pandey, Katharina Rall, Margaret L. Satterthwaite, Oded Nov, and Enrico Bertini. 2015. How Deceptive Are Deceptive Visualizations? An Empirical Analysis of Common Distortion Techniques. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1469--1478. https://doi.org/10.1145/2702123.2702608
[156]
Gabriele Paolacci, Jesse Chandler, and Panagiotis G. Ipeirotis. 2010. Running Experiments on Amazon Mechanical Turk. Judgment and Decision Making, Vol. 5, 5 (2010), 411--419. https://doi.org/10.1017/S1930297500002205
[157]
Manasi Patwardhan, Abhishek Sainani, Richa Sharma, Shirish Karande, and Smita Ghaisas. 2018. Towards Automating Disambiguation of Regulations: Using the Wisdom of Crowds. ACM, New York, NY, USA, 850--855. https://doi.org/10.1145/3238147.3240727
[158]
Weiping Pei, Zhiju Yang, Monchu Chen, and Chuan Yue. 2021. Quality Control in Crowdsourcing Based on Fine-Grained Behavioral Features. Proc. ACM Hum.-Comput. Interact., Vol. 5, CSCW2, Article 442 (2021), 28 pages. https://doi.org/10.1145/3479586
[159]
Andi Peng, Besmira Nushi, Emre K?c?man, Kori Inkpen, Siddharth Suri, and Ece Kamar. 2019. What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 1 (2019), 125--134. https://doi.org/10.1609/hcomp.v7i1.5281
[160]
Mark Petticrew and Helen Roberts. 2006 a. Exploring Heterogeneity and Publication Bias. John Wiley & Sons, Ltd, Malden, MA, Chapter 7, 215--246. https://doi.org/10.1002/9780470754887.ch7
[161]
Mark Petticrew and Helen Roberts. 2006 b. Starting the Review: Refining the Question and Defining the Boundaries. John Wiley & Sons, Ltd, Chapter 2, 27--56. https://doi.org/10.1002/9780470754887.ch2
[162]
Mark Petticrew and Helen Roberts. 2006 c. Systematic Reviews in the Social Sciences. A Practical Guide. Blackwell Publishing, Malden, MA.
[163]
Rehab Qarout, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. 2019. Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 1 (2019), 135--143. https://doi.org/10.1609/hcomp.v7i1.5264
[164]
Chenxi Qiu, Anna Squicciarini, Zhuozhao Li, Ce Pang, and Li Yan. 2020b. Time-Efficient Geo-Obfuscation to Protect Worker Location Privacy over Road Networks in Spatial Crowdsourcing. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM '20). ACM, New York, NY, USA, 1275--1284. https://doi.org/10.1145/3340531.3411863
[165]
Sihang Qiu, Alessandro Bozzon, Max V Birk, and Ujwal Gadiraju. 2021. Using Worker Avatars to Improve Microtask Crowdsourcing. Proceedings of the ACM on Human-Computer Interaction, Vol. 5, CSCW2 (2021), 1--28. https://doi.org/10.1145/3476063
[166]
Sihang Qiu, Alessandro Bozzon, and Geert-Jan Houben. 2020a. VirtualCrowd: A Simulation Platform for Microtask Crowdsourcing Campaigns. ACM, New York, NY, USA, 222--225. https://doi.org/10.1145/3366424.3383546
[167]
Sarvapali D. Ramchurn, Trung Dong Huynh, Yuki Ikuno, Jack Flann, Feng Wu, Luc Moreau, Nicholas R. Jennings, Joel E. Fischer, Wenchao Jiang, Tom Rodden, Edwin Simpson, Steven Reece, and Stephen J. Roberts. 2015. HAC-ER: A Disaster Response System Based on Human-Agent Collectives. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS '15). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 533--541.
[168]
Jorge Ram'irez, Marcos Baez, Fabio Casati, Luca Cernuzzi, and Boualem Benatallah. 2020. DREC: Towards a Datasheet for Reporting Experiments in Crowdsourcing. ACM, New York, NY, USA, 377--382. https://doi.org/10.1145/3406865.3418318
[169]
Jorge Ram'irez, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, Ekaterina A. Taran, and Veronika A. Malanina. 2021a. On the Impact of Predicate Complexity in Crowdsourced Classification Tasks. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM '21). ACM, New York, NY, USA, 67--75. https://doi.org/10.1145/3437963.3441831
[170]
Jorge Ram'irez, Burcu Sayin, Marcos Baez, Fabio Casati, Luca Cernuzzi, Boualem Benatallah, and Gianluca Demartini. 2021b. On the State of Reporting in Crowdsourcing Experiments and a Checklist to Aid Current Practices. Proc. ACM Hum.-Comput. Interact., Vol. 5, CSCW2, Article 387 (2021), 34 pages. https://doi.org/10.1145/3479531
[171]
Jorge Ramírez, Marcos Baez, Fabio Casati, and Boualem Benatallah. 2019. Understanding the Impact of Text Highlighting in Crowdsourcing Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 1 (2019), 144--152. https://doi.org/10.1609/hcomp.v7i1.5268
[172]
Amy Rechkemmer and Ming Yin. 2020. Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 8, 1 (2020), 122--131. https://doi.org/10.1609/hcomp.v8i1.7470
[173]
Khairi Reda, Pratik Nalawade, and Kate Ansah-Koi. 2018. Graphical Perception of Continuous Quantitative Maps: The Effects of Spatial Frequency and Colormap Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--12. https://doi.org/10.1145/3173574.3173846
[174]
Janice Redish and Sharon J. Laskowsk. 2009. Guidelines for Writing Clear Instructions and Messages for Voters and Poll Workers. Technical Report NISTIR 7596. National Institute of Standards and Technology. https://www.nist.gov/publications/guidelines-writing-clear-instructions-and-messages-voters-and-poll-workers
[175]
Theodoros Rekatsinas, Amol Deshpande, and Aditya Parameswaran. 2019. CRUX: Adaptive Querying for Efficient Crowdsourced Data Extraction. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM '19). ACM, New York, NY, USA, 841--850. https://doi.org/10.1145/3357384.3357976
[176]
Samuel Rhys Cox, Yunlong Wang, Ashraf Abdul, Christian von der Weth, and Brian Y. Lim. 2021. Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 393, 35 pages. https://doi.org/10.1145/3411764.3445782
[177]
Ronald E Robertson, Alexandra Olteanu, Fernando Diaz, Milad Shokouhi, and Peter Bailey. 2021. “I Can't Reply with That”: Characterizing Problematic Email Reply Suggestions. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 724, 18 pages. https://doi.org/10.1145/3411764.3445557
[178]
Carlos Rodr'iguez, Florian Daniel, and Fabio Casati. 2016. Mining and Quality Assessment of Mashup Model Patterns with the Crowd: A Feasibility Study. ACM Trans. Internet Technol., Vol. 16, 3, Article 17 (2016), 27 pages. https://doi.org/10.1145/2903138
[179]
Kevin Roitero, Michael Soprano, Shaoyang Fan, Damiano Spina, Stefano Mizzaro, and Gianluca Demartini. 2020. Can The Crowd Identify Misinformation Objectively? The Effects of Judgment Scale and Assessor's Background. ACM, New York, NY, USA, 439--448. https://doi.org/10.1145/3397271.3401112
[180]
Quentin Roy, Futian Zhang, and Daniel Vogel. 2019. Automation Accuracy Is Good, but High Controllability May Be Better. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--8. https://doi.org/10.1145/3290605.3300750
[181]
Sabirat Rubya, Joseph Numainville, and Svetlana Yarosh. 2021. Comparing Generic and Community-Situated Crowdsourcing for Data Validation in the Context of Recovery from Substance Use Disorders. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 449, 17 pages. https://doi.org/10.1145/3411764.3445399
[182]
Lalit Mohan S, Priya Raman, Venkatesh Choppella, and Y. R. Reddy. 2017. A Crowdsourcing Approach for Quality Enhancement of ELearning Systems. In Proceedings of the 10th Innovations in Software Engineering Conference (ISEC '17). ACM, New York, NY, USA, 188--194. https://doi.org/10.1145/3021460.3021483
[183]
Marta Sabou, Klemens Käsznar, Markus Zlabinger, Stefan Biffl, and Dietmar Winkler. 2020. Verifying Extended Entity Relationship Diagrams with Open Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 8, 1 (2020), 132--140. https://doi.org/10.1609/hcomp.v8i1.7471
[184]
Marta Sabou, Dietmar Winkler, Peter Penzerstadler, and Stefan Biffl. 2018. Verifying Conceptual Domain Models with Human Computation: A Case Study in Software Engineering. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 6, 1 (2018), 164--173. https://doi.org/10.1609/hcomp.v6i1.13325
[185]
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. WinoGrande: An Adversarial Winograd Schema Challenge at Scale. Commun. ACM, Vol. 64, 9 (2021), 99--106. https://doi.org/10.1145/3474381
[186]
Niloufar Salehi, Lilly C. Irani, Michael S. Bernstein, Ali Alkhatib, Eva Ogbe, Kristy Milland, and Clickhappier. 2015. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1621--1630. https://doi.org/10.1145/2702123.2702508
[187]
Niloufar Salehi, Jaime Teevan, Shamsi Iqbal, and Ece Kamar. 2017. Communicating Context to the Crowd for Complex Writing Tasks. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17). ACM, New York, NY, USA, 1890--1901. https://doi.org/10.1145/2998181.2998332
[188]
Mike Schaekermann, Joslin Goh, Kate Larson, and Edith Law. 2018. Resolvable vs. Irresolvable Disagreement: A Study on Worker Deliberation in Crowd Work. Proc. ACM Hum.-Comput. Interact., Vol. 2, CSCW, Article 154 (2018), 19 pages. https://doi.org/10.1145/3274423
[189]
Todd W. Schiller and Michael D. Ernst. 2012. Reducing the Barriers to Writing Verified Specifications. SIGPLAN Not., Vol. 47, 10 (2012), 95--112. https://doi.org/10.1145/2398857.2384624
[190]
Oliver S. Schneider, Hasti Seifi, Salma Kashani, Matthew Chun, and Karon E. MacLean. 2016. HapTurk: Crowdsourcing Affective Ratings of Vibrotactile Icons. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 3248--3260. https://doi.org/10.1145/2858036.2858279
[191]
Sebastian Schäfer, David Antons, Dirk Lüttgens, Frank Piller, and Torsten Oliver Salge. 2017. Talk to Your Crowd. Research-Technology Management, Vol. 60, 4 (2017), 33--42. https://doi.org/10.1080/08956308.2017.1325689
[192]
Pao Siangliulue, Joel Chan, Steven P. Dow, and Krzysztof Z. Gajos. 2016. IdeaHound: Improving Large-Scale Collaborative Ideation with Crowd-Powered Real-Time Semantic Modeling. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 609--624. https://doi.org/10.1145/2984511.2984578
[193]
M. S. Silberman, B. Tomlinson, R. LaPlante, J. Ross, L. Irani, and A. Zaldivar. 2018. Responsible Research with Crowds: Pay Crowdworkers at Least Minimum Wage. Commun. ACM, Vol. 61, 3 (2018), 39--41. https://doi.org/10.1145/3180492
[194]
Camelia Simoiu, Chiraag Sumanth, Alok Mysore, and Sharad Goel. 2019. Studying the “Wisdom of Crowds” at Scale. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 1 (2019), 171--179. https://doi.org/10.1609/hcomp.v7i1.5271
[195]
Rachel N. Simons, Danna Gurari, and Kenneth R. Fleischmann. 2020. “I Hope This Is Helpful”: Understanding Crowdworkers' Challenges and Motivations for an Image Description Task. Proc. ACM Hum.-Comput. Interact., Vol. 4, CSCW2, Article 105 (2020), 26 pages. https://doi.org/10.1145/3415176
[196]
Elena Simperl. 2015. How to Use Crowdsourcing Effectively: Guidelines and Examples. LIBER Quarterly: The Journal of the Association of European Research Libraries, Vol. 25, 1 (2015), 18--39. https://doi.org/10.18352/lq.9948
[197]
Divit P. Singh, Lee Lisle, T. M. Murali, and Kurt Luther. 2018. CrowdLayout: Crowdsourced Design and Evaluation of Biological Network Visualizations. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, 1--14. https://doi.org/10.1145/3173574.3173806
[198]
Kinga Skorupska, Manuel Nunez, Wiesław Kopeć, and Radosław Nielek. 2018. Older Adults and Crowdsourcing: Android TV App for Evaluating TEDx Subtitle Quality. Proc. ACM Hum.-Comput. Interact., Vol. 2, CSCW, Article 159 (2018), 23 pages. https://doi.org/10.1145/3274428
[199]
Stephen Smart and Danielle Albers Szafir. 2019. Measuring the Separability of Shape, Size, and Color in Scatterplots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA, 1--14. https://doi.org/10.1145/3290605.3300899
[200]
Marc Spicker, Franz Götz-Hahn, Thomas Lindemeier, Dietmar Saupe, and Oliver Deussen. 2019. Quantifying Visual Abstraction Quality for Computer-Generated Illustrations. ACM Trans. Appl. Percept., Vol. 16, 1, Article 5 (2019), 20 pages. https://doi.org/10.1145/3301414
[201]
Colin Stanley, Heike Winschiers-Theophilus, Michel Onwordi, and Gereon K. Kapuire. 2013. Rural Communities Crowdsource Technology Development: A Namibian Expedition. In Proceedings of the Sixth International Conference on Information and Communications Technologies and Development: Notes - Volume 2 (ICTD '13). ACM, New York, NY, USA, 155--158. https://doi.org/10.1145/2517899.2517930
[202]
Miriah Steiger, Timir J Bharucha, Sukrit Venkatagiri, Martin J. Riedl, and Matthew Lease. 2021. The Psychological Well-Being of Content Moderators: The Emotional Labor of Commercial Moderation and Avenues for Improving Support. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 341, 14 pages. https://doi.org/10.1145/3411764.3445092
[203]
James Surowiecki. 2005. The Wisdom of Crowds. Anchor, New York, NY, USA.
[204]
Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What Are the Biases in My Word Embedding?. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES '19). ACM, New York, NY, USA, 305--311. https://doi.org/10.1145/3306618.3314270
[205]
John C. Tang, Gina Venolia, and Kori M. Inkpen. 2016. Meerkat and Periscope: I Stream, You Stream, Apps Stream for Live Streams. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4770--4780. https://doi.org/10.1145/2858036.2858374
[206]
Brandon Taylor, Anind K. Dey, Daniel Siewiorek, and Asim Smailagic. 2016. Using Crowd Sourcing to Measure the Effects of System Response Delays on User Engagement. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 4413--4422. https://doi.org/10.1145/2858036.2858572
[207]
Benjamin Timmermans, Lora Aroyo, and Chris Welty. 2015. Crowdsourcing Ground Truth for Question Answering Using CrowdTruth. In Proceedings of the ACM Web Science Conference (WebSci '15). ACM, New York, NY, USA, Article 61, 2 pages. https://doi.org/10.1145/2786451.2786492
[208]
Carlos Toxtli, Siddharth Suri, and Saiph Savage. 2021. Quantifying the Invisible Labor in Crowd Work. Proc. ACM Hum.-Comput. Interact., Vol. 5, CSCW2, Article 319 (2021), 26 pages. https://doi.org/10.1145/3476060
[209]
George Trimponias, Xiaojuan Ma, and Qiang Yang. 2019. Rating Worker Skills and Task Strains in Collaborative Crowd Computing: A Competitive Perspective. In The World Wide Web Conference (WWW '19). ACM, New York, NY, USA, 1853--1863. https://doi.org/10.1145/3308558.3313569
[210]
Amos Tversky and Daniel Kahneman. 1973. Availability: A Heuristic for Judging Frequency and Probability. Cognitive Psychology, Vol. 5, 2 (1973), 207--232. https://doi.org/10.1016/0010-0285(73)90033-9
[211]
Stephen Uzor, Jason T. Jacques, John J Dudley, and Per Ola Kristensson. 2021. Investigating the Accessibility of Crowdwork Tasks on Mechanical Turk. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). ACM, New York, NY, USA, Article 381, 14 pages. https://doi.org/10.1145/3411764.3445291
[212]
Rajan Vaish, Snehalkumar (Neil) S. Gaikwad, Geza Kovacs, Andreas Veit, Ranjay Krishna, Imanol Arrieta Ibarra, Camelia Simoiu, Michael Wilber, Serge Belongie, Sharad Goel, James Davis, and Michael S. Bernstein. 2017. Crowd Research: Open and Scalable University Laboratories. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17). ACM, New York, NY, USA, 829--843. https://doi.org/10.1145/3126594.3126648
[213]
Rajan Vaish, Shirish Goyal, Amin Saberi, and Sharad Goel. 2018. Creating Crowdsourced Research Talks at Scale. In Proceedings of the 2018 World Wide Web Conference (WWW '18). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 1--11. https://doi.org/10.1145/3178876.3186031
[214]
Gerard van Alphen, Sihang Qiu, Alessandro Bozzon, and Geert-Jan Houben. 2020. Analyzing Workers Performance in Online Mapping Tasks Across Web, Mobile, and Virtual Reality Platforms. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 8, 1 (2020), 141--149. https://doi.org/10.1609/hcomp.v8i1.7472
[215]
Edwin Van Teijlingen and Vanora Hundley. 2002. The Importance of Pilot Studies. Nursing Standard, Vol. 16, 40 (2002), 33. https://doi.org/10.7748/ns2002.06.16.40.33.c3214
[216]
Keith Vertanen and Per Ola Kristensson. 2014. Complementing Text Entry Evaluations with a Composition Task. ACM Trans. Comput.-Hum. Interact., Vol. 21, 2, Article 8 (2014), 33 pages. https://doi.org/10.1145/2555691
[217]
Ruben Vicente-Saez and Clara Martinez-Fuentes. 2018. Open Science Now: A Systematic Literature Review for an Integrated Definition. Journal of Business Research, Vol. 88 (2018), 428--436. https://doi.org/10.1016/j.jbusres.2017.12.043
[218]
Athanasios Vogogias, Daniel Archambault, Benjamin Bach, and Jessie Kennedy. 2020. Visual Encodings for Networks with Multiple Edge Types. In Proceedings of the International Conference on Advanced Visual Interfaces. ACM, New York, NY, USA, Article 37, 9 pages. https://doi.org/10.1145/3399715.3399827
[219]
Jan vom Brocke, Alexander Simons, Kai Riemer, Bjoern Niehaves, Ralf Plattfaut, and Anne Cleven. 2015. Standing on the Shoulders of Giants: Challenges and Recommendations of Literature Search in Information Systems Research. Communications of the Association for Information Systems, Vol. 37 (2015). https://doi.org/10.17705/1CAIS.03709
[220]
Vassilios Vonikakis, Ramanathan Subramanian, Jonas Arnfred, and Stefan Winkler. 2014. Modeling Image Appeal Based on Crowd Preferences for Automated Person-Centric Collage Creation. In Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia (CrowdMM '14). ACM, New York, NY, USA, 9--15. https://doi.org/10.1145/2660114.2660126
[221]
Ding Wang, Shantanu Prabhat, and Nithya Sambasivan. 2022. Whose AI Dream? In Search of the Aspiration in Data Annotation. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). ACM, New York, NY, USA, Article 582, 16 pages. https://doi.org/10.1145/3491102.3502121
[222]
Nai-Ching Wang, David Hicks, and Kurt Luther. 2018. Exploring Trade-Offs Between Learning and Productivity in Crowdsourced History. Proc. ACM Hum.-Comput. Interact., Vol. 2, CSCW, Article 178 (2018), 24 pages. https://doi.org/10.1145/3274447
[223]
Yihong Wang, Konstantinos Papangelis, Ioanna Lykourentzou, Hai-Ning Liang, Irwyn Sadien, Evangelia Demerouti, and Vassilis-Javed Khan. 2020. In Their Shoes: A Structured Analysis of Job Demands, Resources, Work Experiences, and Platform Commitment of Crowdworkers in China. Proc. ACM Hum.-Comput. Interact., Vol. 4, GROUP, Article 07 (2020), 40 pages. https://doi.org/10.1145/3375187
[224]
Evan Welbourne, Pang Wu, Xuan Bao, and Emmanuel Munguia-Tapia. 2014. Crowdsourced Mobile Data Collection: Lessons Learned from a New Study Methodology. In Proceedings of the 15th Workshop on Mobile Computing Systems and Applications (HotMobile '14). ACM, New York, NY, USA, Article 2, 6 pages. https://doi.org/10.1145/2565585.2565608
[225]
Chris Welty, Lora Aroyo, Flip Korn, Sara M. McCarthy, and Shubin Zhao. 2021. Rapid Instance-Level Knowledge Acquisition for Google Maps from Class-Level Common Sense. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 9, 1 (2021), 143--154. https://doi.org/10.1609/hcomp.v9i1.18947
[226]
Miaomiao Wen, Keith Maki, Steven Dow, James D. Herbsleb, and Carolyn Rosé. 2017. Supporting Virtual Team Formation through Community-Wide Deliberation. Proc. ACM Hum.-Comput. Interact., Vol. 1, CSCW, Article 109 (2017), 19 pages. https://doi.org/10.1145/3134744
[227]
Etienne Wenger. 2011. Communities of Practice: A Brief Introduction. http://hdl.handle.net/1794/11736
[228]
Mark E. Whiting, Dilrukshi Gamage, Snehalkumar (Neil) S. Gaikwad, Aaron Gilbee, Shirish Goyal, Alipta Ballav, Dinesh Majeti, Nalin Chhibber, Angela Richmond-Fuller, Freddie Vargus, Tejas Seshadri Sarma, Varshine Chandrakanthan, Teogenes Moura, Mohamed Hashim Salih, Gabriel Bayomi Tinoco Kalejaiye, Adam Ginzberg, Catherine A. Mullings, Yoni Dayan, Kristy Milland, Henrique Orefice, Jeff Regino, Sayna Parsi, Kunz Mainali, Vibhor Sehgal, Sekandar Matin, Akshansh Sinha, Rajan Vaish, and Michael S. Bernstein. 2017. Crowd Guilds: Worker-Led Reputation and Feedback on Crowdsourcing Platforms. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17). ACM, New York, NY, USA, 1902--1913. https://doi.org/10.1145/2998181.2998234
[229]
Mark E. Whiting, Grant Hugh, and Michael S. Bernstein. 2019. Fair Work: Crowd Work Minimum Wage with One Line of Code. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 1 (2019), 197--206. https://doi.org/10.1609/hcomp.v7i1.5283
[230]
Shomir Wilson, Florian Schaub, Frederick Liu, Kanthashree Mysore Sathyendra, Daniel Smullen, Sebastian Zimmeck, Rohan Ramanath, Peter Story, Fei Liu, Norman Sadeh, and Noah A. Smith. 2018. Analyzing Privacy Policies at Scale: From Crowdsourcing to Automated Annotations. ACM Trans. Web, Vol. 13, 1, Article 1 (2018), 29 pages. https://doi.org/10.1145/3230665
[231]
Shomir Wilson, Florian Schaub, Rohan Ramanath, Norman Sadeh, Fei Liu, Noah A. Smith, and Frederick Liu. 2016. Crowdsourcing Annotations for Websites' Privacy Policies: Can It Really Work?. In Proceedings of the 25th International Conference on World Wide Web (WWW '16). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 133--143. https://doi.org/10.1145/2872427.2883035
[232]
Dietmar Winkler, Marta Sabou, Sanja Petrovic, Gisele Carneiro, Marcos Kalinowski, and Stefan Biffl. 2017. Improving Model Inspection with Crowdsourcing. In Proceedings of the 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE '17). IEEE, 30--34. https://doi.org/10.1109/CSI-SE.2017.2
[233]
Bård Winther, Michael Riegler, Lilian Calvet, Carsten Griwodz, and Pål Halvorsen. 2015. Why Design Matters: Crowdsourcing of Complex Tasks. In Proceedings of the Fourth International Workshop on Crowdsourcing for Multimedia (CrowdMM '15). ACM, New York, NY, USA, 27--32. https://doi.org/10.1145/2810188.2810190
[234]
Peng Xu and Martha Larson. 2014. Users Tagging Visual Moments: Timed Tags in Social Video. In Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia (CrowdMM '14). ACM, New York, NY, USA, 57--62. https://doi.org/10.1145/2660114.2660124
[235]
Xiaotong Xu, Judith Fan, and Steven Dow. 2020. Schema and Metadata Guide the Collective Generation of Relevant and Diverse Work. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 8, 1 (2020), 178--182. https://doi.org/10.1609/hcomp.v8i1.7479
[236]
Shota Yamanaka. 2021. Utility of Crowdsourced User Experiments for Measuring the Central Tendency of User Performance to Evaluate Error-Rate Models on GUIs. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 9, 1 (2021), 155--165. https://doi.org/10.1609/hcomp.v9i1.18948
[237]
Huahai Yang, Yunyao Li, and Michelle X. Zhou. 2014. Understand Users' Comprehension and Preferences for Composing Information Visualizations. ACM Trans. Comput.-Hum. Interact., Vol. 21, 1, Article 6 (2014), 30 pages. https://doi.org/10.1145/2541288
[238]
Jie Yang, Judith Redi, Gianluca Demartini, and Alessandro Bozzon. 2016. Modeling Task Complexity in Crowdsourcing. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 4. AAAI, Palo Alto, CA, USA, 249--258. https://doi.org/10.1609/hcomp.v4i1.13283
[239]
Pınar Yelmi, Hüseyin Kuşcu, and Asım Evren Yantaç. 2016. Towards a Sustainable Crowdsourced Sound Heritage Archive by Public Participation: The Soundsslike Project. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI '16). ACM, New York, NY, USA, Article 71, 9 pages. https://doi.org/10.1145/2971485.2971492
[240]
Ming Yin and Yiling Chen. 2015. Bonus or Not? Learn to Reward in Crowdsourcing. In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI'15). AAAI, Palo Alto, CA, USA, 201--207. https://doi.org/10.5555/2832249.2832277
[241]
Ming Yin, Mary L. Gray, Siddharth Suri, and Jennifer Wortman Vaughan. 2016. The Communication Network Within the Crowd. In Proceedings of the 25th International Conference on World Wide Web (WWW '16). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 1293--1303. https://doi.org/10.1145/2872427.2883036
[242]
Lixiu Yu, Robert E. Kraut, and Aniket Kittur. 2016. Distributed Analogical Idea Generation with Multiple Constraints. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW '16). ACM, New York, NY, USA, 1236--1245. https://doi.org/10.1145/2818048.2835201
[243]
Ran Yu, Ujwal Gadiraju, Peter Holtz, Markus Rokicki, Philipp Kemkes, and Stefan Dietze. 2018. Predicting User Knowledge Gain in Informational Search Sessions. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR '18). ACM, New York, NY, USA, 75--84. https://doi.org/10.1145/3209978.3210064
[244]
Angie Zhang, Alexander Boltz, Chun Wei Wang, and Min Kyung Lee. 2022. Algorithmic Management Reimagined For Workers and By Workers: Centering Worker Well-Being in Gig Work. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). ACM, New York, NY, USA, Article 14, 20 pages. https://doi.org/10.1145/3491102.3501866
[245]
Yinglong Zhang, Jin Zhang, Matthew Lease, and Jacek Gwizdka. 2014. Multidimensional Relevance Modeling via Psychometrics and Crowdsourcing. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR '14). ACM, New York, NY, USA, 435--444. https://doi.org/10.1145/2600428.2609577
[246]
Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, and Avishek Anand. 2019. Dissonance Between Human and Machine Understanding. Proc. ACM Hum.-Comput. Interact., Vol. 3, CSCW, Article 56 (2019), 23 pages. https://doi.org/10.1145/3359158
