Designing for Hybrid Intelligence: A Taxonomy and Survey of Crowd-Machine Interaction
Figure 1. Iterative taxonomy development process flow (a) and methodological details underlying the work undertaken in this study (b). Adapted from Nickerson and co-authors [6].
Figure 2. Taxonomy of hybrid crowd-AI systems. This taxonomic proposal integrates key conceptual dimensions of the human-centered AI framework introduced in [41] to characterize the configurations in which crowd-AI interaction occurs within the interplay between human and machine intelligence.
Figure 3. Synthesis of the literature analysis based on the taxonomy proposed.
Figure 4. Example of a taxonomic scheme used to classify a crowd-AI interaction scenario [39].
Abstract
1. Introduction and Context
2. Background and Scope
3. Methodological Approach
4. ‘Inside the Matrix’: In Pursuit of a Taxonomy for Hybrid Crowd-AI Interaction
4.1. Temporal and Spatial Axes of Crowd-AI Systems
4.2. Crowd-Machine Hybrid Task Execution and Delegation
4.3. Contextual Factors and Situational Characteristics in Crowd-Computing Arrangements
4.4. Deconstructing the Crowd Behavior Continuum in Hybrid Crowd-Machine Supported Environments
4.5. Hybrid Intelligence Systems at a Crowd Scale: An Infrastructural Viewpoint
4.6. ‘Rebuilding from the Ruins’: Hybrid Crowd-Artificial Intelligence and Its Social-Ethical Caveats
5. Validation and Assessment of the Proposed Taxonomy
6. Concluding Discussion and Challenges Ahead
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
ID | Author(s) | Year | Title |
---|---|---|---|
P1 | Huang et al. | 2018 | Evorus: A crowd-powered conversational assistant built to automate itself over time |
P2 | Kaspar et al. | 2018 | Crowd-guided ensembles: How can we choreograph crowd workers for video segmentation? |
P3 | Guo et al. | 2018 | Crowd-AI camera sensing in the real world |
P4 | Nushi et al. | 2018 | Towards accountable AI: Hybrid human-machine analyses for characterizing system failure |
P5 | Krivosheev et al. | 2018 | Combining crowd and machines for multi-predicate item screening |
P6 | Chan et al. | 2018 | SOLVENT: A mixed initiative system for finding analogies between research papers |
P7 | Yang et al. | 2019 | Scalpel-CD: Leveraging crowdsourcing and deep probabilistic modeling for debugging noisy training data |
P8 | Trouille et al. | 2019 | Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human-machine systems |
P9 | Park et al. | 2019 | AI-based request augmentation to increase crowdsourcing participation |
P10 | Kittur et al. | 2019 | Scaling up analogical innovation with crowds and AI |
P11 | Mohanty et al. | 2020 | Photo Sleuth: Identifying historical portraits with face recognition and crowdsourced human expertise |
P12 | Zhang et al. | 2020 | Crowd-assisted disaster scene assessment with human-AI interactive attention |
P13 | Zhang et al. | 2021 | CollabLearn: An uncertainty-aware crowd-AI collaboration system for cultural heritage damage assessment |
P14 | Kobayashi et al. | 2021 | Human+AI crowd task assignment considering result quality requirements |
P15 | Palmer et al. | 2021 | Citizen science, computing, and conservation: How can “Crowd AI” change the way we tackle large-scale ecological challenges? |
P16 | Anjum et al. | 2021 | Exploring the use of deep learning with crowdsourcing to annotate images |
P17 | Zhang et al. | 2021 | StreamCollab: A streaming crowd-AI collaborative system to smart urban infrastructure monitoring in social sensing |
P18 | Lemmer et al. | 2021 | Crowdsourcing more effective initializations for single-target trackers through automatic re-querying |
P19 | Groh et al. | 2022 | Deepfake detection by human crowds, machines, and machine-informed crowds |
P20 | Zhang et al. | 2022 | On streaming disaster damage assessment in social sensing: A crowd-driven dynamic neural architecture searching approach |
P21 | Kou et al. | 2022 | Crowd, expert & AI: A human-AI interactive approach towards natural language explanation based COVID-19 misinformation detection |
P22 | Guo et al. | 2022 | CrowdHMT: Crowd intelligence with the deep fusion of human, machine, and IoT |
P23 | Wang et al. | 2022 | Graph optimized data offloading for crowd-AI hybrid urban tracking in intelligent transportation systems |
P24 | Gal et al. | 2022 | A new workflow for human-AI collaboration in citizen science |
P25 | Zhang et al. | 2022 | CrowdOptim: A crowd-driven neural network hyperparameter optimization approach to AI-based smart urban sensing |
Source Type | Publication Venue (No. of Papers) |
---|---|
Conference Proceedings | AAAI Conference on Artificial Intelligence |
 | AAAI Conference on Human Computation and Crowdsourcing (4) |
 | ACM Conference on Human Factors in Computing Systems (3) |
 | ACM Conference on Information Technology for Social Good |
 | ACM Web Conference |
 | International Joint Conference on Artificial Intelligence |
Journal/Transactions | ACM Transactions on Interactive Intelligent Systems |
 | Human Computation (2) |
 | IEEE Internet of Things Journal |
 | IEEE Transactions on Computational Social Systems |
 | IEEE Transactions on Intelligent Transportation Systems |
 | Knowledge-Based Systems |
 | Proceedings of the ACM on Human-Computer Interaction (3) |
 | Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies |
 | Proceedings of the National Academy of Sciences (3) |
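Figure 4 illustrates how a single crowd-AI interaction scenario can be coded against the proposed taxonomy. As a purely illustrative companion to that figure and to the corpus listed above, the sketch below shows one way such a coding could be captured in machine-readable form; the dimension names paraphrase Sections 4.1–4.6, and the field values in the example record are hypothetical rather than taken from the survey's actual classification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CrowdAIScenario:
    """One crowd-AI interaction scenario coded along taxonomy-like dimensions.

    The dimension names paraphrase Sections 4.1-4.6 of the survey; the value
    vocabularies are illustrative assumptions, not the paper's own coding book.
    """
    paper_id: str
    temporal_setting: str    # Section 4.1, e.g., "synchronous" / "asynchronous"
    spatial_setting: str     # Section 4.1, e.g., "co-located" / "distributed"
    task_delegation: str     # Section 4.2, e.g., "crowd-assists-AI" / "AI-assists-crowd"
    context: str             # Section 4.3, situational characteristics
    crowd_behavior: str      # Section 4.4, e.g., "independent" / "collaborative"
    infrastructure: str      # Section 4.5, e.g., "microtask market", "citizen science platform"
    socio_ethical_flags: List[str] = field(default_factory=list)  # Section 4.6

# Hypothetical coding of a conversational-assistant scenario such as P1 (Evorus);
# the values are invented for illustration only.
example = CrowdAIScenario(
    paper_id="P1",
    temporal_setting="synchronous",
    spatial_setting="distributed",
    task_delegation="crowd-assists-AI",
    context="open-domain dialogue support",
    crowd_behavior="collaborative",
    infrastructure="microtask market",
    socio_ethical_flags=["fair compensation", "worker invisibility"],
)
print(example)
```

Records of this kind could then be aggregated across the 25 surveyed papers to reproduce a synthesis along the lines of Figure 3.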
References
- Lofi, C.; El Maarry, K. Design patterns for hybrid algorithmic-crowdsourcing workflows. In Proceedings of the IEEE 16th Conference on Business Informatics, Geneva, Switzerland, 14–17 July 2014; pp. 1–8. [Google Scholar]
- Heim, E.; Roß, T.; Seitel, A.; März, K.; Stieltjes, B.; Eisenmann, M.; Lebert, J.; Metzger, J.; Sommer, G.; Sauter, A.W.; et al. Large-scale medical image annotation with crowd-powered algorithms. J. Med. Imaging 2018, 5, 034002. [Google Scholar] [CrossRef] [PubMed]
- Vargas-Santiago, M.; Monroy, R.; Ramirez-Marquez, J.E.; Zhang, C.; Leon-Velasco, D.A.; Zhu, H. Complementing solutions to optimization problems via crowdsourcing on video game plays. Appl. Sci. 2020, 10, 8410. [Google Scholar] [CrossRef]
- Bharadwaj, A.; Gwizdala, D.; Kim, Y.; Luther, K.; Murali, T.M. Flud: A hybrid crowd–algorithm approach for visualizing biological networks. ACM Trans. Comput.-Hum. Interact. 2022, 29, 1–53. [Google Scholar] [CrossRef]
- Grudin, J.; Poltrock, S. Taxonomy and theory in computer supported cooperative work. Oxf. Handb. Organ. Psychol. 2012, 2, 1323–1348. [Google Scholar] [CrossRef]
- Nickerson, R.C.; Varshney, U.; Muntermann, J. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 2013, 22, 336–359. [Google Scholar] [CrossRef]
- Harris, A.M.; Gómez-Zará, D.; DeChurch, L.A.; Contractor, N.S. Joining together online: The trajectory of CSCW scholarship on group formation. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–27. [Google Scholar] [CrossRef]
- McGrath, J.E. Groups: Interaction and Performance; Prentice-Hall: Englewood Cliffs, NJ, USA, 1984. [Google Scholar]
- Shaw, M.E. Scaling group tasks: A method for dimensional analysis. JSAS Cat. Sel. Doc. Psychol. 1973, 3, 8. [Google Scholar]
- Modaresnezhad, M.; Iyer, L.; Palvia, P.; Taras, V. Information technology (IT) enabled crowdsourcing: A conceptual framework. Inf. Process. Manag. 2020, 57, 102135. [Google Scholar] [CrossRef]
- Bhatti, S.S.; Gao, X.; Chen, G. General framework, opportunities and challenges for crowdsourcing techniques: A comprehensive survey. J. Syst. Softw. 2020, 167, 110611. [Google Scholar] [CrossRef]
- Johansen, R. Groupware: Computer Support for Business Teams; The Free Press: New York, NY, USA, 1988. [Google Scholar]
- Lee, C.P.; Paine, D. From the matrix to a model of coordinated action (MoCA): A conceptual framework of and for CSCW. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 179–194. [Google Scholar]
- Renyi, M.; Gaugisch, P.; Hunck, A.; Strunck, S.; Kunze, C.; Teuteberg, F. Uncovering the complexity of care networks—Towards a taxonomy of collaboration complexity in homecare. Comput. Support. Coop. Work (CSCW) 2022, 31, 517–554. [Google Scholar] [CrossRef]
- Thomer, A.K.; Twidale, M.B.; Yoder, M.J. Transforming taxonomic interfaces: “Arm’s length” cooperative work and the maintenance of a long-lived classification system. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–23. [Google Scholar] [CrossRef]
- Akata, Z.; Balliet, D.; de Rijke, M.; Dignum, F.; Dignum, V.; Eiben, G.; Fokkens, A.; Grossi, D.; Hindriks, K.V.; Hoos, H.H.; et al. A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer 2020, 53, 18–28. [Google Scholar] [CrossRef]
- Pescetelli, N. A brief taxonomy of hybrid intelligence. Forecasting 2021, 3, 633–643. [Google Scholar] [CrossRef]
- Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; Ebel, P. The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; pp. 274–283. [Google Scholar]
- Dubey, A.; Abhinav, K.; Jain, S.; Arora, V.; Puttaveerana, A. HACO: A framework for developing human-AI teaming. In Proceedings of the 13th Innovations in Software Engineering Conference, Jabalpur, India, 27–29 February 2020; pp. 1–9. [Google Scholar]
- Littmann, M.; Suomela, T. Crowdsourcing, the great meteor storm of 1833, and the founding of meteor science. Endeavour 2014, 38, 130–138. [Google Scholar] [CrossRef]
- Corney, J.R.; Torres-Sánchez, C.; Jagadeesan, A.P.; Regli, W.C. Outsourcing labour to the cloud. Int. J. Innovation Sustain. Dev. 2009, 4, 294–313. [Google Scholar] [CrossRef]
- Rouse, A.C. A preliminary taxonomy of crowdsourcing. In Proceedings of the Australasian Conference on Information Systems, Brisbane, Australia, 1–3 December 2010; Volume 76. [Google Scholar]
- Malone, T.W.; Laubacher, R.; Dellarocas, C. The collective intelligence genome. IEEE Eng. Manag. Rev. 2010, 38, 38–52. [Google Scholar] [CrossRef]
- Zwass, V. Co-creation: Toward a taxonomy and an integrated research perspective. Int. J. Electron. Commer. 2010, 15, 11–48. [Google Scholar] [CrossRef]
- Doan, A.; Ramakrishnan, R.; Halevy, A.Y. Crowdsourcing systems on the world-wide web. Commun. ACM 2011, 54, 86–96. [Google Scholar] [CrossRef]
- Saxton, G.D.; Oh, O.; Kishore, R. Rules of crowdsourcing: Models, issues, and systems of control. Inf. Syst. Manag. 2013, 30, 2–20. [Google Scholar] [CrossRef]
- Nakatsu, R.T.; Grossman, E.B.; Iacovou, C.L. A taxonomy of crowdsourcing based on task complexity. J. Inf. Sci. 2014, 40, 823–834. [Google Scholar] [CrossRef]
- Gadiraju, U.; Kawase, R.; Dietze, S. A taxonomy of microtasks on the web. In Proceedings of the 25th ACM Conference on Hypertext and Social Media, Santiago, Chile, 1–4 September 2014; pp. 218–223. [Google Scholar]
- Hosseini, M.; Shahri, A.; Phalp, K.; Taylor, J.; Ali, R. Crowdsourcing: A taxonomy and systematic mapping study. Comput. Sci. Rev. 2015, 17, 43–69. [Google Scholar] [CrossRef]
- Alabduljabbar, R.; Al-Dossari, H. Towards a classification model for tasks in crowdsourcing. In Proceedings of the Second International Conference on Internet of Things and Cloud Computing, Cambridge, UK, 22–23 March 2017; pp. 1–7. [Google Scholar]
- Chen, Q.; Magnusson, M.; Björk, J. Exploring the effects of problem- and solution-related knowledge sharing in internal crowdsourcing. J. Knowl. Manag. 2022, 26, 324–347. [Google Scholar] [CrossRef]
- Chilton, L.B.; Little, G.; Edge, D.; Weld, D.S.; Landay, J.A. Cascade: Crowdsourcing taxonomy creation. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 1999–2008. [Google Scholar]
- Sharif, A.; Gopal, P.; Saugstad, M.; Bhatt, S.; Fok, R.; Weld, G.; Dey, K.A.M.; Froehlich, J.E. Experimental crowd+AI approaches to track accessibility features in sidewalk intersections over time. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, 18–22 October 2021; pp. 1–5. [Google Scholar]
- Zhang, D.Y.; Huang, Y.; Zhang, Y.; Wang, D. Crowd-assisted disaster scene assessment with human-AI interactive attention. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 2717–2724. [Google Scholar]
- Kaspar, A.; Patterson, G.; Kim, C.; Aksoy, Y.; Matusik, W.; Elgharib, M. Crowd-guided ensembles: How can we choreograph crowd workers for video segmentation? In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar]
- Zhang, Y.; Zong, R.; Kou, Z.; Shang, L.; Wang, D. CollabLearn: An uncertainty-aware crowd-AI collaboration system for cultural heritage damage assessment. IEEE Trans. Comput. Soc. Syst. 2021, 9, 1515–1529. [Google Scholar] [CrossRef]
- Maier-Hein, L.; Ross, T.; Gröhl, J.; Glocker, B.; Bodenstedt, S.; Stock, C.; Heim, E.; Götz, M.; Wirkert, S.J.; Kenngott, H.; et al. Crowd-algorithm collaboration for large-scale endoscopic image annotation with confidence. In Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 616–623. [Google Scholar]
- Mohanty, V.; Thames, D.; Mehta, S.; Luther, K. Photo Sleuth: Combining human expertise and face recognition to identify historical portraits. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; pp. 547–557. [Google Scholar]
- Huang, T.H.; Chang, J.C.; Bigham, J.P. Evorus: A crowd-powered conversational assistant built to automate itself over time. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; p. 295. [Google Scholar]
- Guo, A.; Jain, A.; Ghose, S.; Laput, G.; Harrison, C.; Bigham, J.P. Crowd-AI camera sensing in the real world. Proc. ACM Interactive, Mobile, Wearable Ubiquitous Technol. 2018, 2, 1–20. [Google Scholar] [CrossRef]
- Correia, A.; Paredes, H.; Schneider, D.; Jameel, S.; Fonseca, B. Towards hybrid crowd-AI centered systems: Developing an integrated framework from an empirical perspective. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Bari, Italy, 6–9 October 2019; pp. 4013–4018. [Google Scholar]
- Xu, W.; Dainoff, M.J.; Ge, L.; Gao, Z. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J. Hum.-Comput. Interact. 2022, 39, 494–518. [Google Scholar] [CrossRef]
- Colazo, M.; Alvarez-Candal, A.; Duffard, R. Zero-phase angle asteroid taxonomy classification using unsupervised machine learning algorithms. Astron. Astrophys. 2022, 666, A77. [Google Scholar] [CrossRef]
- Mock, F.; Kretschmer, F.; Kriese, A.; Böcker, S.; Marz, M. Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proc. Natl. Acad. Sci. USA 2022, 119, e2122636119. [Google Scholar] [CrossRef]
- Rasch, R.F. The nature of taxonomy. Image J. Nurs. Scholarsh. 1987, 19, 147–149. [Google Scholar] [CrossRef]
- Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.; Colquhoun, H.; Kastner, M.; Levac, D.; Ng, C.; Sharpe, J.P.; Wilson, K.; et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med. Res. Methodol. 2016, 16, 15. [Google Scholar] [CrossRef]
- Sokal, R.R. Phenetic taxonomy: Theory and methods. Annu. Rev. Ecol. Syst. 1986, 17, 423–442. [Google Scholar] [CrossRef]
- Oberländer, A.M.; Lösser, B.; Rau, D. Taxonomy research in information systems: A systematic assessment. In Proceedings of the 27th European Conference on Information Systems, Stockholm and Uppsala, Sweden, 8–14 June 2019. [Google Scholar]
- Gerber, A. Computational ontologies as classification artifacts in IS research. In Proceedings of the 24th Americas Conference on Information Systems, New Orleans, LA, USA, 16–18 August 2018. [Google Scholar]
- Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 2002, 26, 2. [Google Scholar]
- Schmidt-Kraepelin, M.; Thiebes, S.; Tran, M.C.; Sunyaev, A. What’s in the game? Developing a taxonomy of gamification concepts for health apps. In Proceedings of the 51st Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 3–6 January 2018; pp. 1–10. [Google Scholar]
- Sai, A.R.; Buckley, J.; Fitzgerald, B.; Le Gear, A. Taxonomy of centralization in public blockchain systems: A systematic literature review. Inf. Process. Manag. 2021, 58, 102584. [Google Scholar] [CrossRef]
- Andraschko, L.; Wunderlich, P.; Veit, D.; Sarker, S. Towards a taxonomy of smart home technology: A preliminary understanding. In Proceedings of the 42nd International Conference on Information Systems, Austin, TX, USA, 12–15 December 2021. [Google Scholar]
- Larsen, K.R.; Hovorka, D.; Dennis, A.; West, J.D. Understanding the elephant: The discourse approach to boundary identification and corpus construction for theory review articles. J. Assoc. Inf. Syst. 2019, 20, 15. [Google Scholar] [CrossRef]
- Elliott, J.H.; Turner, T.; Clavisi, O.; Thomas, J.; Higgins, J.P.T.; Mavergames, C.; Gruen, R.L. Living systematic reviews: An emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014, 11, e1001603. [Google Scholar] [CrossRef]
- Singh, V.K.; Singh, P.; Karmakar, M.; Leta, J.; Mayr, P. The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis. Scientometrics 2021, 126, 5113–5142. [Google Scholar] [CrossRef]
- Kittur, A.; Nickerson, J.V.; Bernstein, M.; Gerber, E.; Shaw, A.; Zimmerman, J.; Lease, M.; Horton, J.J. The future of crowd work. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, 23–27 February 2013; pp. 1301–1318. [Google Scholar]
- Zhang, D.; Zhang, Y.; Li, Q.; Plummer, T.; Wang, D. CrowdLearn: A crowd-AI hybrid system for deep learning-based damage assessment applications. In Proceedings of the 39th IEEE International Conference on Distributed Computing Systems, Dallas, TX, USA, 7–10 July 2019; pp. 1221–1232. [Google Scholar]
- Landolt, S.; Wambsganss, T.; Söllner, M. A taxonomy for deep learning in natural language processing. In Proceedings of the 54th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5 January 2021; pp. 1061–1070. [Google Scholar]
- Straus, S.G. Testing a typology of tasks: An empirical validation of McGrath’s (1984) group task circumplex. Small Group Res. 1999, 30, 166–187. [Google Scholar] [CrossRef]
- Chesbrough, H.W. Open Innovation: The New Imperative for Creating and Profiting from Technology; Harvard Business Press: Boston, MA, USA, 2003. [Google Scholar]
- Karachiwalla, R.; Pinkow, F. Understanding crowdsourcing projects: A review on the key design elements of a crowdsourcing initiative. Creativity Innov. Manag. 2021, 30, 563–584. [Google Scholar] [CrossRef]
- Hemmer, P.; Schemmer, M.; Vössing, M.; Kühl, N. Human-AI complementarity in hybrid intelligence systems: A structured literature review. In Proceedings of the 25th Pacific Asia Conference on Information Systems, Virtual Event, Dubai, United Arab Emirates, 12–14 July 2021; p. 78. [Google Scholar]
- Holstein, K.; Aleven, V.; Rummel, N. A conceptual framework for human-AI hybrid adaptivity in education. In Proceedings of the 21st International Conference on Artificial Intelligence in Education, Ifrane, Morocco, 6–10 July 2020; pp. 240–254. [Google Scholar]
- Siemon, D. Elaborating team roles for artificial intelligence-based teammates in human-AI collaboration. Group Decis. Negot. 2022, 31, 871–912. [Google Scholar] [CrossRef]
- Weber, E.; Marzo, N.; Papadopoulos, D.P.; Biswas, A.; Lapedriza, A.; Ofli, F.; Imran, M.; Torralba, A. Detecting natural disasters, damage, and incidents in the wild. In Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 331–350. [Google Scholar]
- Vaughan, J.W. Making better use of the crowd: How crowdsourcing can advance machine learning research. J. Mach. Learn. Res. 2017, 18, 7026–7071. [Google Scholar]
- Hamadi, R.; Ghazzai, H.; Massoud, Y. A generative adversarial network for financial advisor recruitment in smart crowdsourcing platforms. Appl. Sci. 2022, 12, 9830. [Google Scholar] [CrossRef]
- Alter, S. Work system theory: Overview of core concepts, extensions, and challenges for the future. J. Assoc. Inf. Syst. 2013, 14, 2. [Google Scholar] [CrossRef]
- Venumuddala, V.R.; Kamath, R. Work systems in the Indian information technology (IT) industry delivering artificial intelligence (AI) solutions and the challenges of work from home. Inf. Syst. Front. 2022, 1–25. [Google Scholar] [CrossRef] [PubMed]
- Nardi, B. Context and Consciousness: Activity Theory and Human-Computer Interaction; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
- Neale, D.C.; Carroll, J.M.; Rosson, M.B. Evaluating computer-supported cooperative work: Models and frameworks. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chicago, IL, USA, 6–10 November 2004; pp. 112–121. [Google Scholar]
- Lee, S.W.; Krosnick, R.; Park, S.Y.; Keelean, B.; Vaidya, S.; O’Keefe, S.D.; Lasecki, W.S. Exploring real-time collaboration in crowd-powered systems through a UI design tool. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–23. [Google Scholar] [CrossRef]
- Wang, X.; Ding, L.; Wang, Q.; Xie, J.; Wang, T.; Tian, X.; Guan, Y.; Wang, X. A picture is worth a thousand words: Share your real-time view on the road. IEEE Trans. Veh. Technol. 2016, 66, 2902–2914. [Google Scholar] [CrossRef]
- Agapie, E.; Teevan, J.; Monroy-Hernández, A. Crowdsourcing in the field: A case study using local crowds for event reporting. In Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing, San Diego, CA, USA, 8–11 November 2015; pp. 2–11. [Google Scholar]
- Lafreniere, B.J.; Grossman, T.; Anderson, F.; Matejka, J.; Kerrick, H.; Nagy, D.; Vasey, L.; Atherton, E.; Beirne, N.; Coelho, M.H.; et al. Crowdsourced fabrication. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 15–28. [Google Scholar]
- Aristeidou, M.; Scanlon, E.; Sharples, M. Profiles of engagement in online communities of citizen science participation. Comput. Hum. Behav. 2017, 74, 246–256. [Google Scholar] [CrossRef]
- Bouwer, A. Under which conditions are humans motivated to delegate tasks to AI? A taxonomy on the human emotional state driving the motivation for AI delegation. In Marketing and Smart Technologies; Springer: Singapore, 2022; pp. 37–53. [Google Scholar]
- Lubars, B.; Tan, C. Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 57–67. [Google Scholar]
- Sun, Y.; Ma, X.; Ye, K.; He, L. Investigating crowdworkers’ identify, perception and practices in micro-task crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–20. [Google Scholar] [CrossRef]
- Khan, V.J.; Papangelis, K.; Lykourentzou, I.; Markopoulos, P. Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019. [Google Scholar]
- Teevan, J. The future of microwork. XRDS Crossroads ACM Mag. Stud. 2016, 23, 26–29. [Google Scholar] [CrossRef]
- Zulfiqar, M.; Malik, M.N.; Khan, H.H. Microtasking activities in crowdsourced software development: A systematic literature review. IEEE Access 2022, 10, 24721–24737. [Google Scholar] [CrossRef]
- Rahman, H.; Roy, S.B.; Thirumuruganathan, S.; Amer-Yahia, S.; Das, G. Optimized group formation for solving collaborative tasks. VLDB J. 2018, 28, 1–23. [Google Scholar] [CrossRef]
- Schmitz, H.; Lykourentzou, I. Online sequencing of non-decomposable macrotasks in expert crowdsourcing. ACM Trans. Soc. Comput. 2018, 1, 1–33. [Google Scholar] [CrossRef]
- Jin, Y.; Carman, M.; Zhu, Y.; Xiang, Y. A technical survey on statistical modelling and design methods for crowdsourcing quality control. Artif. Intell. 2020, 287, 103351. [Google Scholar] [CrossRef]
- Moayedikia, A.; Ghaderi, H.; Yeoh, W. Optimizing microtask assignment on crowdsourcing platforms using Markov chain Monte Carlo. Decis. Support Syst. 2020, 139, 113404. [Google Scholar] [CrossRef]
- Amershi, S.; Weld, D.; Vorvoreanu, M.; Fourney, A.; Nushi, B.; Collisson, P.; Suh, J.; Iqbal, S.T.; Bennett, P.N.; Inkpen, K.; et al. Guidelines for human-AI interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019. [Google Scholar]
- Rafner, J.; Gajdacz, M.; Kragh, G.; Hjorth, A.; Gander, A.; Palfi, B.; Berditchevskiaia, A.; Grey, F.; Gal, K.; Segal, A.; et al. Mapping citizen science through the lens of human-centered AI. Hum. Comput. 2022, 9, 66–95. [Google Scholar] [CrossRef]
- Shneiderman, B. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 2020, 10, 1–31. [Google Scholar] [CrossRef]
- Ramírez, J.; Sayin, B.; Baez, M.; Casati, F.; Cernuzzi, L.; Benatallah, B.; Demartini, G. On the state of reporting in crowdsourcing experiments and a checklist to aid current practices. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–34. [Google Scholar] [CrossRef]
- Robert, L.; Romero, D.M. Crowd size, diversity and performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 1379–1382. [Google Scholar]
- Blandford, A. Intelligent interaction design: The role of human-computer interaction research in the design of intelligent systems. Expert Syst. 2001, 18, 3–18. [Google Scholar] [CrossRef]
- Huang, K.; Zhou, J.; Chen, S. Being a solo endeavor or team worker in crowdsourcing contests? It is a long-term decision you need to make. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–32. [Google Scholar] [CrossRef]
- Venkatagiri, S.; Thebault-Spieker, J.; Kohler, R.; Purviance, J.; Mansur, R.S.; Luther, K. GroundTruth: Augmenting expert image geolocation with crowdsourcing and shared representations. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–30. [Google Scholar] [CrossRef]
- Zhou, S.; Valentine, M.; Bernstein, M.S. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar]
- Gray, M.L.; Suri, S.; Ali, S.S.; Kulkarni, D. The crowd is a collaborative network. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; pp. 134–147. [Google Scholar]
- Zhang, X.; Zhang, W.; Zhao, Y.; Zhu, Q. Imbalanced volunteer engagement in cultural heritage crowdsourcing: A task-related exploration based on causal inference. Inf. Process. Manag. 2022, 59, 103027. [Google Scholar] [CrossRef]
- McNeese, N.J.; Demir, M.; Cooke, N.J.; She, M. Team situation awareness and conflict: A study of human–machine teaming. J. Cogn. Eng. Decis. Mak. 2021, 15, 83–96. [Google Scholar] [CrossRef]
- Dafoe, A.; Bachrach, Y.; Hadfield, G.; Horvitz, E.; Larson, K.; Graepel, T. Cooperative AI: Machines must learn to find common ground. Nature 2021, 593, 33–36. [Google Scholar] [CrossRef]
- Alorwu, A.; Savage, S.; van Berkel, N.; Ustalov, D.; Drutsa, A.; Oppenlaender, J.; Bates, O.; Hettiachchi, D.; Gadiraju, U.; Gonçalves, J.; et al. REGROW: Reimagining global crowdsourcing for better human-AI collaboration. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7. [Google Scholar]
- Santos, C.A.; Baldi, A.M.; de Assis Neto, F.R.; Barcellos, M.P. Essential elements, conceptual foundations and workflow design in crowd-powered projects. J. Inf. Sci. 2022. [Google Scholar] [CrossRef]
- Valentine, M.A.; Retelny, D.; To, A.; Rahmati, N.; Doshi, T.; Bernstein, M.S. Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 3523–3537. [Google Scholar]
- Kamar, E. Directions in hybrid intelligence: Complementing AI systems with human intelligence. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 4070–4073. [Google Scholar]
- Tocchetti, A.; Corti, L.; Brambilla, M.; Celino, I. EXP-Crowd: A gamified crowdsourcing framework for explainability. Front. Artif. Intell. 2022, 5, 826499. [Google Scholar] [CrossRef] [PubMed]
- Barbosa, N.M.; Chen, M. Rehumanized crowdsourcing: A labeling framework addressing bias and ethics in machine learning. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–12. [Google Scholar]
- Basker, T.; Tottler, D.; Sanguet, R.; Muffbur, J. Artificial intelligence and human learning: Improving analytic reasoning via crowdsourcing and structured analytic techniques. Comput. Educ. 2022, 3, 1003056. [Google Scholar]
- Mirbabaie, M.; Brendel, A.B.; Hofeditz, L. Ethics and AI in information systems research. Commun. Assoc. Inf. Syst. 2022, 50, 38. [Google Scholar] [CrossRef]
- Sundar, S.S. Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). J. Comput. Commun. 2020, 25, 74–88. [Google Scholar] [CrossRef]
- Liu, B. In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. J. Comput. Commun. 2021, 26, 384–402. [Google Scholar] [CrossRef]
- Kang, H.; Lou, C. AI agency vs. human agency: Understanding human–AI interactions on TikTok and their implications for user engagement. J. Comput. Commun. 2022, 27, zmac014. [Google Scholar] [CrossRef]
- Daniel, F.; Kucherbaev, P.; Cappiello, C.; Benatallah, B.; Allahbakhsh, M. Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. ACM Comput. Surv. 2018, 51, 1–40. [Google Scholar] [CrossRef]
- Pedersen, J.; Kocsis, D.; Tripathi, A.; Tarrell, A.; Weerakoon, A.; Tahmasbi, N.; Xiong, J.; Deng, W.; Oh, O.; de Vreede, G.-J. Conceptual foundations of crowdsourcing: A review of IS research. In Proceedings of the 46th Hawaii International Conference on System Sciences, Wailea, HI, USA, 7–10 January 2013; pp. 579–588. [Google Scholar]
- Hansson, K.; Ludwig, T. Crowd dynamics: Conflicts, contradictions, and community in crowdsourcing. Comput. Support. Coop. Work. 2019, 28, 791–794. [Google Scholar] [CrossRef]
- Gimpel, H.; Graf-Seyfried, V.; Laubacher, R.; Meindl, O. Towards artificial intelligence augmenting facilitation: AI affordances in macro-task crowdsourcing. Group Decis. Negot. 2023, 1–50. [Google Scholar] [CrossRef]
- Wu, T.; Terry, M.; Cai, C.J. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022. [Google Scholar]
- Kobayashi, M.; Wakabayashi, K.; Morishima, A. Human+AI crowd task assignment considering result quality requirements. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing, Virtual, 14–18 November 2021; pp. 97–107. [Google Scholar]
- Eggert, M.; Alberts, J. Frontiers of business intelligence and analytics 3.0: A taxonomy-based literature review and research agenda. Bus. Res. 2020, 13, 685–739. [Google Scholar] [CrossRef]
- Chan, J.; Chang, J.C.; Hope, T.; Shahaf, D.; Kittur, A. SOLVENT: A mixed initiative system for finding analogies between research papers. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–21. [Google Scholar] [CrossRef]
- Zhang, Y.; Shang, L.; Zong, R.; Wang, Z.; Kou, Z.; Wang, D. StreamCollab: A streaming crowd-AI collaborative system to smart urban infrastructure monitoring in social sensing. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing, Virtual, 14–18 November 2021; pp. 179–190. [Google Scholar]
- Yang, J.; Smirnova, A.; Yang, D.; Demartini, G.; Lu, Y.; Cudré-Mauroux, P. Scalpel-CD: Leveraging crowdsourcing and deep probabilistic modeling for debugging noisy training data. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2158–2168. [Google Scholar]
- Schlagwein, D.; Cecez-Kecmanovic, D.; Hanckel, B. Ethical norms and issues in crowdsourcing practices: A Habermasian analysis. Inf. Syst. J. 2018, 29, 811–837. [Google Scholar] [CrossRef]
- Gadiraju, U.; Demartini, G.; Kawase, R.; Dietze, S. Crowd anatomy beyond the good and bad: Behavioral traces for crowd worker modeling and pre-selection. Comput. Support. Coop. Work 2018, 28, 815–841. [Google Scholar] [CrossRef]
- Palmer, M.S.; Huebner, S.E.; Willi, M.; Fortson, L.; Packer, C. Citizen science, computing, and conservation: How can “crowd AI” change the way we tackle large-scale ecological challenges? Hum. Comput. 2021, 8, 54–75. [Google Scholar] [CrossRef]
- Mannes, A. Governance, risk, and artificial intelligence. AI Mag. 2020, 41, 61–69. [Google Scholar] [CrossRef]
- Choung, H.; David, P.; Ross, A. Trust and ethics in AI. AI Soc. 2022, 1–13. [Google Scholar] [CrossRef]
- Zheng, Q.; Tang, Y.; Liu, Y.; Liu, W.; Huang, Y. UX research on conversational human-AI interaction: A literature review of the ACM Digital Library. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022. [Google Scholar]
- Heath, C.; Svensson, M.S.; Hindmarsh, J.; Luff, P.; Vom Lehn, D. Configuring awareness. Comput. Support. Coop. Work. 2002, 11, 317–347. [Google Scholar] [CrossRef]
- Park, J.; Krishna, R.; Khadpe, P.; Fei-Fei, L.; Bernstein, M. AI-based request augmentation to increase crowdsourcing participation. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing, Stevenson, WA, USA, 28–30 October 2019; pp. 115–124. [Google Scholar]
- Star, S.L.; Ruhleder, K. Steps towards an ecology of infrastructure: Complex problems in design and access for large-scale collaborative systems. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chapel Hill, NC, USA, 22–26 October 1994; pp. 253–264. [Google Scholar]
- Mosconi, G.; Korn, M.; Reuter, C.; Tolmie, P.; Teli, M.; Pipek, V. From Facebook to the neighbourhood: Infrastructuring of hybrid community engagement. Comput. Support. Coop. Work (CSCW) 2017, 26, 959–1003. [Google Scholar] [CrossRef]
- Ehsan, U.; Liao, Q.V.; Muller, M.; Riedl, M.O.; Weisz, J.D. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–19. [Google Scholar]
- Thieme, A.; Cutrell, E.; Morrison, C.; Taylor, A.; Sellen, A. Interpretability as a dynamic of human-AI interaction. Interactions 2020, 27, 40–45. [Google Scholar] [CrossRef]
- Walzner, D.D.; Fuegener, A.; Gupta, A. Managing AI advice in crowd decision-making. In Proceedings of the International Conference on Information Systems, Copenhagen, Denmark, 9–14 December 2022; p. 1315. [Google Scholar]
- Anjum, S.; Verma, A.; Dang, B.; Gurari, D. Exploring the use of deep learning with crowdsourcing to annotate images. Hum. Comput. 2021, 8, 76–106. [Google Scholar] [CrossRef]
- Trouille, L.; Lintott, C.J.; Fortson, L.F. Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human-machine systems. Proc. Natl. Acad. Sci. USA 2019, 116, 1902–1909. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Z.; Yatani, K. Gesture-aware interactive machine teaching with in-situ object annotations. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; pp. 1–14. [Google Scholar]
- Avdic, M.; Bødker, S.; Larsen-Ledet, I. Two cases for traces: A theoretical framing of mediated joint activity. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28. [Google Scholar] [CrossRef]
- Tchernavskij, P.; Bødker, S. Entangled artifacts: The meeting between a volunteer-run citizen science project and a biodiversity data platform. In Proceedings of the Nordic Human-Computer Interaction Conference, Aarhus, Denmark, 8–12 October 2022; pp. 1–13. [Google Scholar]
- Rzeszotarski, J.M.; Kittur, A. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 13–22. [Google Scholar]
- Newman, A.; McNamara, B.; Fosco, C.; Zhang, Y.B.; Sukhum, P.; Tancik, M.; Kim, N.W.; Bylinskii, Z. TurkEyes: A web-based toolbox for crowdsourcing attention data. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13. [Google Scholar]
- Goyal, T.; McDonnell, T.; Kutlu, M.; Elsayed, T.; Lease, M. Your behavior signals your reliability: Modeling crowd behavioral traces to ensure quality relevance annotations. In Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, Zürich, Switzerland, 5–8 July 2018; pp. 41–49. [Google Scholar]
- Hettiachchi, D.; Van Berkel, N.; Kostakos, V.; Goncalves, J. CrowdCog: A cognitive skill based system for heterogeneous task assignment and recommendation in crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2020, 4, 1–22. [Google Scholar] [CrossRef]
- Zimmerman, J.; Oh, C.; Yildirim, N.; Kass, A.; Tung, T.; Forlizzi, J. UX designers pushing AI in the enterprise: A case for adaptive UIs. Interactions 2020, 28, 72–77. [Google Scholar] [CrossRef]
- Hettiachchi, D.; Kostakos, V.; Goncalves, J. A survey on task assignment in crowdsourcing. ACM Comput. Surv. 2022, 55, 1–35. [Google Scholar] [CrossRef]
- Pei, W.; Yang, Z.; Chen, M.; Yue, C. Quality control in crowdsourcing based on fine-grained behavioral features. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28. [Google Scholar] [CrossRef]
- Bakici, T. Comparison of crowdsourcing platforms from social-psychological and motivational perspectives. Int. J. Inf. Manag. 2020, 54, 102121. [Google Scholar] [CrossRef]
- Truong, N.V.-Q.; Dinh, L.C.; Stein, S.; Tran-Thanh, L.; Jennings, N.R. Efficient and adaptive incentive selection for crowdsourcing contests. Appl. Intell. 2022, 1–31. [Google Scholar] [CrossRef]
- Correia, A.; Jameel, S.; Paredes, H.; Fonseca, B.; Schneider, D. Hybrid machine-crowd interaction for handling complexity: Steps toward a scaffolding design framework. In Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019; pp. 149–161. [Google Scholar]
- Sutherland, W.; Jarrahi, M.H.; Dunn, M.; Nelson, S.B. Work precarity and gig literacies in online freelancing. Work Employ. Soc. 2019, 34, 457–475. [Google Scholar] [CrossRef]
- Salminen, J.; Kamel, A.M.S.; Jung, S.-G.; Mustak, M.; Jansen, B.J. Fair compensation of crowdsourcing work: The problem of flat rates. Behav. Inf. Technol. 2022, 1–22. [Google Scholar] [CrossRef]
ID | Experimental Settings | Pre-Selection Mechanism(s) | Cost per HIT and Platform(s) | Time Allotted |
---|---|---|---|---|
P1 | 5-month-long deployment and testing with real users (n = 80 crowd workers) | - | $0.142 (Phase-1 deployment); $0.211 (Control Phase); MTurk; Hangoutsbot | ~10 min (per conversation) |
P2 | Ensemble method combining multiple results on individual frame segmentations and crowd-based propagated segmentation results (n = 70 crowd workers) | - | $0.90 (Segmentation); $0.15 (Scribble); MTurk | 142.6 s (per frame segmentation); 2.5 s (per method scribbles) |
P3 | 4-week testing (n = 17 participants), with an unspecified number of crowd workers | >95% assignment approval rate; Gold standard question sensor instances | ~$10/hour ($0.02 for each task performed on MTurk) | ~3 s (per labeled question sensor instance) |
P5 | Classification of potential studies for a systematic literature review (n = 147 crowd workers) | >70% overall accuracy; Worker screening based on two test questions | $10/hour; MTurk | - |
P6 | Purpose-mechanism annotation analogical search (n = 3 crowd workers per document), with an unspecified number of crowd workers | >=95% acceptance rate; Training step based on a gold standard example before the task execution | $30/hour (Upwork-worker 1); $20/hour (Upwork-worker 2); $10/hour ($0.70 for each task performed on MTurk) | 1.3 min (per document annotation); 4 min (overall task completion) |
P9 | Contextual bandit algorithm and agent deployment powered by AI-based request strategies for visual question answering, with an unspecified number of crowd workers | Training step using examples and a qualifying task | $12/hour; MTurk | - |
P13 | Performance evaluation of a crowd-AI hybrid framework through real-world datasets (n = 3 crowd workers per image in a crowd query) with an unspecified number of crowd workers | >95% overall task approval rate; >=1000 HITs completed | $0.20 for each worker per-image annotation; Labelme; MTurk | - |
P14 | A method for AI worker evaluation that uses a “divide-and-conquer” strategy for dynamic task assignment with an unspecified number of crowd workers | No strategies were deployed to target malicious workers | $240 for 2 h of labor; MTurk | -
P16 | Evaluation of hybrid crowd-algorithmic workflows for image annotation based on time completion and quality, with an unspecified number of crowd workers | >92% approval rate; >500 HITs completed | $9/hour ($0.20 for each task performed on MTurk) | 80 s (per HIT completion) |
P17 | Evaluation of crowd responses and computational performance in identifying damages from urban infrastructure imagery data (n = 2 to 5 crowd workers per query), with an unspecified number of crowd workers | >95% overall task approval rate; >=1000 HITs completed | $0.05 for each worker per image classification; MTurk | 0.0227 s (average time taken to accomplish each streaming urban monitoring task using a hybrid crowd-AI model)
P18 | Evaluation of model performance to re-query or not crowdsourced initializations for bounding-box annotations (n = 26 crowd workers located in the United States) | A gold standard for identifying inattentive workers; Annotators with more than 15% incorrect annotations were disregarded | ~$12/hour ($0.06 for each bounding-box annotation); MTurk | - |
P19 | Randomized online experiments comparing the performance of a computer vision model and a crowd of 15,016 individuals in tasks related to the detection of authentic vs. deepfake videos (n = 5524 participants: Experiment 1; n = 9492 participants: Experiment 2) | - | $7.28/hour plus bonus payments of 20% to the top participants; Experiment hosted on an external website (i.e., Detect Fakes); 304 participants recruited from Prolific | 15 min (per task completion) |
P20 | Performance evaluation of a dynamic optimal neural architecture searching framework that leverages crowdsourcing for handling disaster damage assessment problems with an unspecified number of crowd workers | >95% overall task approval rate; >=1000 HITs completed | $0.20 for each crowd worker per-image labeling; MTurk | 0.0198 s (average time with varying crowd query frequency); 0.0201 s (average time with varying numbers of crowd workers) |
P21 | Evaluation of a hybrid framework combining expert and crowd intelligence with explainable AI for misinformation detection (n = 3 crowd workers per HIT plus 5 experts), with an unspecified number of crowd workers | >=95% task acceptance rate | Unspecified amount above the minimum requirement on MTurk ($0.01 per assignment) | 61 s (average time of task completion); 21.4 h (total waiting time to collect and aggregate contributions from crowd workers) |
P25 | Development of a crowd-AI system for optimizing smart urban sensing applications (n = 3 to 7 crowd workers per task), with an unspecified number of crowd workers | >95% overall task approval rate; >=1000 HITs completed | $0.05 for each crowd worker per image classification; MTurk | - |
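Several rows above gate participation through platform qualifications (e.g., “>95% overall task approval rate; >=1000 HITs completed”) and fixed per-HIT rewards. The following is a minimal, non-authoritative sketch of how such pre-selection and pricing are commonly expressed on Amazon Mechanical Turk via the boto3 SDK; the reward, assignment counts, durations, task URL, and question XML are placeholders and do not reproduce any surveyed study’s actual setup.

```python
import boto3

# Connect to the MTurk requester sandbox (assumption: AWS credentials are configured).
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Worker pre-selection comparable to ">95% approval rate; >=1000 HITs completed".
# The two IDs are MTurk's built-in system qualification types.
qualifications = [
    {   # PercentAssignmentsApproved > 95
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThan",
        "IntegerValues": [95],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
    {   # NumberHITsApproved >= 1000
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [1000],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    },
]

# Placeholder ExternalQuestion pointing at a hypothetical task page.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/image-annotation-task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Label damage severity in one image",
    Description="Illustrative image-annotation HIT (not from the surveyed studies)",
    Keywords="image, annotation, crowd-AI",
    Reward="0.20",                     # cost per HIT, as in several table rows
    MaxAssignments=3,                  # e.g., three workers per image
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=86400,
    Question=question_xml,
    QualificationRequirements=qualifications,
)
print(response["HIT"]["HITId"])
```

Responses gathered under such a configuration would then be aggregated (e.g., by majority vote or uncertainty-weighted fusion, as in P13/P17/P20) before being handed back to the machine-learning component.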
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Correia, A.; Grover, A.; Schneider, D.; Pimentel, A.P.; Chaves, R.; de Almeida, M.A.; Fonseca, B. Designing for Hybrid Intelligence: A Taxonomy and Survey of Crowd-Machine Interaction. Appl. Sci. 2023, 13, 2198. https://doi.org/10.3390/app13042198