Review

Designing for Hybrid Intelligence: A Taxonomy and Survey of Crowd-Machine Interaction

by António Correia 1,2,*, Andrea Grover 2, Daniel Schneider 3,4, Ana Paula Pimentel 3, Ramon Chaves 5, Marcos Antonio de Almeida 5 and Benjamim Fonseca 1

1 INESC TEC, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
2 College of Information Science & Technology, University of Nebraska at Omaha, Omaha, NE 68182, USA
3 Postgraduate Program in Informatics, PPGI/UFRJ, Rio de Janeiro 21941-916, Brazil
4 Tércio Pacitti Institute of Computer Applications and Research (NCE), Federal University of Rio de Janeiro, Rio de Janeiro 21941-916, Brazil
5 Systems Engineering and Computer Science Program (PESC/COPPE/UFRJ), Rio de Janeiro 21941-972, Brazil
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2198; https://doi.org/10.3390/app13042198
Submission received: 30 December 2022 / Revised: 29 January 2023 / Accepted: 2 February 2023 / Published: 8 February 2023
(This article belongs to the Special Issue Novel Hybrid Intelligence Techniques in Engineering)

Abstract

With the widespread availability and pervasiveness of artificial intelligence (AI) in many application areas across the globe, crowdsourcing has become increasingly important for scaling up data-driven algorithms in rapid cycles through a relatively low-cost distributed workforce or even on a volunteer basis. However, there has been little systematic and empirical examination of the interplay among the processes and activities that constitute crowd-machine hybrid interaction. To uncover the enduring aspects characterizing the human-centered AI design space when involving ensembles of crowds and algorithms, along with their symbiotic relations and requirements, a Computer-Supported Cooperative Work (CSCW) lens strongly rooted in the taxonomic tradition of conceptual scheme development is taken with the aim of aggregating and characterizing some of the main component entities in the burgeoning domain of hybrid crowd-AI centered systems. The goal of this article is thus to propose a theoretically grounded and empirically validated analytical framework for the study of crowd-machine interaction and its environment. Based on a scoping review and several cross-sectional analyses of research studies comprising hybrid forms of human interaction with AI systems and applications at a crowd scale, the available literature was distilled and incorporated into a unifying framework comprising taxonomic units distributed across integration dimensions that range from the original time and space axes in which every collaborative activity takes place to the main attributes that constitute a hybrid intelligence architecture. The upshot is that, when turning to the challenges inherent in tasks requiring massive participation, novel properties can be obtained for a set of potential scenarios that go beyond the single experience of a human interacting with the technology to encompass a vast set of massive machine-crowd interactions.

1. Introduction and Context

Crowd-centered design is far from a trivial undertaking, and it becomes even more challenging when trying to implement hybrid intelligence models incorporating human cognition into algorithmic-crowdsourcing workflows [1]. In fact, crowd-algorithm interaction has recently reached a certain level of maturity, and a vast range of crowd-powered algorithms have been applied successfully in areas like medical image segmentation [2] and games with a purpose (GWAP) [3]. In these instances, crowds of untrained (non-expert) online workers have been shown to achieve detection accuracy comparable to that of other groups such as domain knowledge experts, medical students, and experienced crowd workers. Further investigations in this burgeoning domain have also shown that crowd-algorithm hybrids can outperform crowd-only techniques in accomplishing tasks like examining protein interactions and chemical reactions, which are very common in the field of network biology [4]. Nonetheless, the taxonomic rationale behind the mass interaction efforts between crowds and machines as an integrated and complex socio-technical system is not completely understood, and there is a need to find novel ways of characterizing this body of work in its full range. To address this gap, a review of the main activities and contexts in which such crowd-AI ensembles have been investigated was carried out to develop a taxonomic scheme that is as comprehensive as possible and captures the nuances that distinguish these ensembles from other types of interactions between humans and computational systems.
For more than three decades, taxonomy development has been seen as a crucial part of socio-technical research within the field of CSCW [5]. To some extent, taxonomies provide a useful guide and theoretical foundation for assessing technological developments due to their capability to organize complex concepts and knowledge structures into understandable formats [6]. Looking back over time, one may find several taxonomic approaches that formed the basis for understanding the task types that are currently present in many crowdsourcing systems. For a review of prior taxonomic proposals, the reader is referred to Harris and co-authors [7]. In retrospect, McGrath [8] proposed a circumplex model of group tasks intended to characterize their nature (e.g., decision-making) into four quadrants that reflect the processes involved in their execution (i.e., generate, choose, negotiate, and execute). Moving even further back in history, Shaw [9] asserted the importance of aspects like task difficulty and intrinsic interest, which are seen as foundational in several conceptual frameworks proposed to characterize the broader crowdsourcing phenomenon (e.g., [10,11]). According to some authors, Johansen’s [12] time-space matrix is a landmark in the field of CSCW and inspired the development of descriptive models such as the Model of Coordinated Action (MoCA) [13], which frames each collaborative work arrangement on a continuum of synchronicity (synchronous vs. asynchronous), physical distribution, scale (i.e., number of participants), number of communities of practice involved, nascence and planned permanence of coordinated actions, and turnover. More recently, Renyi and colleagues [14] carried out a set of data collection and processing procedures involving structured interviews in order to create a taxonomic scheme covering the components related to collaboration technology support in home care work, while other authors have devoted most of their efforts to the design of innovative taxonomic interfaces [15]. In addition, there is now an emerging body of research documenting the different levels of hybrid intelligence in human-algorithm interactions.
From a more generic view, the concept of hybrid intelligence has been defined as the “combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them and achieving goals that were unreachable by either humans or machines” [16]. Stemming from this definition and from the experimental evidence discussed above, the time now seems appropriate to develop a new taxonomic proposal that can be used for planning and assessing activities among humans (crowds) and algorithms in a hybrid mode. To the best of the authors’ knowledge, no previous work has specifically focused on crowd-AI interaction, although there are some research works addressing the particularities of hybrid human-AI intelligence at a taxonomic level. For example, Pescetelli [17] stressed the role of algorithms as assistants, peers, facilitators, and system-level operators. On the other hand, Dellermann and associates [18] characterized the design space of hybrid intelligence systems and recalled the importance of the task itself and its characteristics as a central aspect of collaboration among humans and machines. In the same vein, Dubey et al. [19] proposed a taxonomy of human-AI teaming comprised of task properties, trust-related aspects, teaming characteristics (e.g., shared awareness), and the learning paradigm involved. However, these taxonomies have not yet fully explored the particularities of hybrid crowd-AI systems and their use cases in real-world applications. Through a qualitative inspection of conceptual frameworks, artifacts, case studies, and empirical results comprising some type of human-AI hybrid interaction at a massive scale, this article’s contribution lies in systematically structuring a set of attributes and characteristics into an integrated taxonomy that arises as a continuum of co-evolving crowd-algorithmic partnerships intended to solve complex problems that neither humans nor machines can solve separately.
The article is set out as follows. After a discussion of background work in Section 2, a description of the methodological steps leading to the development of a taxonomy for hybrid crowd-AI systems is provided in Section 3. The resulting taxonomic framework is then presented and discussed in detail in Section 4, while Section 5 is concerned with the validation of the proposed taxonomy. Finally, possible extensions of this work are suggested in Section 6 by looking toward the future of hybrid systems from a socio-technical view of human-centered systems design.

2. Background and Scope

The point of departure for building the taxonomy presented in this article was the existing work found at the intersection between human-computer interaction (HCI) and AI from a crowdsourcing perspective. Although the term ‘crowdsourcing’ was coined in the mid-2000s, some may argue that its origin is rooted in the seminal work of the physicist and astronomer Denison Olmsted, who used news media as a crowdsourcing strategy for obtaining accurate observations on the Leonid meteor shower that was witnessed across the United States in 1833 [20]. What is interesting to note is that the sequential steps and general techniques used by Olmsted about nineteen decades ago constitute the basis for most current crowdsourcing applications. Against this background, a variety of taxonomies and conceptual frameworks have been developed to better characterize the way information technology (IT)-enabled crowdsourcing operates. Among the known classifications of crowdsourcing activities, Corney and co-authors [21] were among the first to frame this phenomenon from a taxonomic point of view by incorporating the nature of the crowd, the payment mechanisms or lack thereof, and the type of task into an integrated framework. In line with this, Rouse [22] proposed a taxonomy that comprises the different forms of intrinsic and extrinsic motivation that can lead to a successful crowdsourcing experience (e.g., social status, altruistic behavior, and personal achievement). This taxonomic proposal also addresses a set of aspects that are specific to the nature of the crowdsourcing task being undertaken by encompassing the expertise and complexity that are directly or indirectly involved in such initiatives. On the basis of insights from the history of group support systems, one can notice parallels with McGrath’s [8] task circumplex taxons when considering the different task types that can be executed by individuals in a group structure, which may include decision-making, idea generation, and information gathering, to name just a few examples.
To an extent, this research strand led to the proliferation of several taxonomies incorporating task-related elements (e.g., [23,24,25,26,27,28,29,30]). Consistent with the task properties discussed in most of these studies, a cursory look at the literature reveals certain commonalities related to crowd attributes (e.g., reputation), requester features (e.g., incentivization), and platform facilities such as aggregation and payment mechanisms [29]. Other research works have focused specifically on internal forms of crowdsourcing [31] or even on the use of crowdsourcing as a taxonomy development strategy by itself [32]. On a more generic level, Modaresnezhad and colleagues [10] made a clear distinction between the IT-enabled crowdsourcing requirements in business and non-business contexts by basing their proposal on the four collective intelligence “genes” proposed by Malone et al. [23]. However, these taxonomies fail to fully account for the hybrid nature of crowd-AI interaction and thus are unable to capture the variety of interactions and relations that occur when using a hybrid intelligence system.
During the last few years, advances in the development of AI technologies have quietly leveraged a large pool of crowd workers worldwide who provide data on a daily basis and thus contribute to improving numerous models at a scale never seen before. In fact, this intertwinement of algorithms with crowdsourcing workflows has brought important advantages in a multiplicity of settings. Prior work employing these principles has proved effective in detecting accessibility problems on public surfaces (e.g., sidewalks) through the use of street-level imagery [33]. In the same vein, Zhang and associates [34] proposed a system for identifying urban infrastructure damages, such as fallen street signs, when AI-based solutions fail to recognize them. These architectures have also been applied in the context of video object segmentation [35], cultural heritage damage identification [36], endoscopic image annotation [37], and historical portrait identification [38]. In addition, weaving together crowd- and AI-powered techniques has also resulted in positive outcomes in real-time and remote on-demand assistance [39]. In the literature, there are also examples of sensing systems embedded in real-world environments (e.g., domestic spaces) that resort to built-in cameras and crowdsourcing interfaces for dynamic image labeling [40]. That is, crowd-AI hybrid systems are now able to engage humans and machines through a massively collaborative joint action that spans research fields and temporal and geographical boundaries [41]. Drawing from previous studies on the characterization of hybrid intelligence systems from a taxonomic viewpoint [18], the work conducted herein expands upon what has been previously investigated by examining the many facets of crowd-machine hybrid systems and thus identifying key thematic elements derived from the literature.

3. Methodological Approach

Drawing on a literature review of extant studies on human-AI interaction with a crowd-in-the-loop, this article outlines a particular set of arrangements in which the research on this burgeoning area can inform the development of future hybrid intelligence systems while contributing to understanding the socio-technical practices that require humans and machines working together towards a common goal. To this end, this work takes a human-centered AI approach [42] guided by the evidence-based taxonomy development method proposed by Nickerson and colleagues [6], as depicted in Figure 1. Synoptically, the practice of taxonomic classification can be described as a full-fledged endeavor in fields like astrophysics [43] and genetics [44] that usually consists of a formal semantic model with empirically or conceptually derived dimensions and characteristics that are exhaustive and mutually exclusive by nature [6]. At their structural level, taxonomies may have hierarchical or non-hierarchical configurations [45] and be constantly subjected to updating revisions [15]. Building on these methodological elements, the present study draws on the HCI body of literature to create a taxonomy of crowd-AI hybrids and thus aid researchers, practitioners, and anyone concerned with the understanding and development of these technologies. With this in mind, a step forward is made by distilling a variety and breadth of conceptual units from studies that seek to address the complementary way in which human crowds interact with AI systems. Essentially, this study sheds light on the socio-technical dimensions of crowd-AI integration by acknowledging that both social and technical aspects must be taken into account to understand the functioning of a hybrid system as a whole.
In this study, a novel set of heuristics and theoretical aspects is proposed as a foundational structure for future research based on a scoping review that follows the guidelines of evidence-based practice [46]. From a methodological perspective, this approach seeks to systematically categorize research into a classification scheme that is then used as a foundation for taxonomy construction and validation. To operationalize the taxonomic process, a phenetic approach [47] was used throughout a set of iterative cycles until the ending conditions were met. To this end, this article explores the vast space covered by the literature on hybrid crowd-AI systems grounded in case studies, ethnographic fieldwork, conceptual frameworks, surveys, semi-structured interviews, experimental work, mixed methods, and technical artifacts (e.g., algorithms). The taxonomy-building process followed the formal definition of Nickerson et al. [6] to create a taxonomy T with “a set of n dimensions D_i (i = 1, …, n), each consisting of k_i (k_i ≥ 2) mutually exclusive and collectively exhaustive characteristics C_ij (j = 1, …, k_i) such that each object under consideration has one and only one C_ij for each D_i, or T = {D_i, i = 1, …, n | D_i = {C_ij, j = 1, …, k_i; k_i ≥ 2}}”. It is worth noting that the guidelines provided by Nickerson and associates [6] represent one of the most well-established methodological approaches for taxonomy development in the field of information systems (IS), as reported in a recent literature review [48]. In this vein, these guidelines were systematically applied in an effort to make the proposed taxonomy as clear, concise, robust, comprehensive, explanatory, and extendible as possible, in line with the conditions advocated by Gerber [49] for the creation of classification artifacts.
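To make the structure behind this formal definition concrete, the following minimal sketch (illustrative only, not material from the reviewed studies) encodes a taxonomy as a set of dimensions with mutually exclusive, collectively exhaustive characteristics and checks that a classified object takes exactly one characteristic per dimension; the dimension and characteristic names are hypothetical examples loosely modeled on the taxonomic units introduced later in Section 4.

# A minimal sketch of Nickerson et al.'s definition: a taxonomy T as dimensions D_i,
# each with k_i >= 2 mutually exclusive, collectively exhaustive characteristics C_ij.
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str
    characteristics: frozenset  # C_i1 ... C_ik_i, with k_i >= 2

    def __post_init__(self):
        if len(self.characteristics) < 2:
            raise ValueError(f"Dimension '{self.name}' needs at least two characteristics")

@dataclass
class Taxonomy:
    dimensions: list = field(default_factory=list)

    def validate(self, obj: dict) -> None:
        """Raise if the object does not take exactly one valid characteristic per dimension."""
        for dim in self.dimensions:
            value = obj.get(dim.name)
            if value not in dim.characteristics:
                raise ValueError(
                    f"'{dim.name}' must be one of {sorted(dim.characteristics)}, got {value!r}"
                )

# Hypothetical dimensions loosely based on the taxonomic units discussed later (T1, T2, ...).
taxonomy = Taxonomy([
    Dimension("temporal_setting", frozenset({"synchronous", "asynchronous"})),
    Dimension("spatial_setting", frozenset({"co-located", "remote"})),
    Dimension("task_granularity", frozenset({"microtask", "macrotask"})),
])

# A hypothetical study classified against the three example dimensions (passes silently).
taxonomy.validate({"temporal_setting": "asynchronous",
                   "spatial_setting": "remote",
                   "task_granularity": "microtask"})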
The first phase of taxonomy development consisted of a descriptive literature analysis [50] to identify rationales for the use of crowd-AI hybrids. This was followed by a systematic examination of the extracted insights, which were further categorized into a literature classification scheme. In fact, this empirical-to-conceptual methodological approach has been a common procedure for data collection in taxonomy-building activities (e.g., [51,52,53]), involving a set of systematic processes that range from a literature search to data filtering and classification. For taxonomic validation, a conventional approach for corpus construction was used as previously described in [54]. Essentially, the sample used in this study is an expanded version of that used in [41]. This was achieved by following a living systematic review protocol [55], where the search strategy is maintained and updated in a continuous manner as new studies become available. For the purpose of this review, a simple Boolean query was applied using the following sequence of terms:
((“crowd*-AI” OR “AI-crowd*” OR “crowd*-machine” OR “machine-crowd*” OR “crowd*-computing”) AND (“interact*”))
This study expanded upon a previous corpus to accommodate a new set of possible settings in which crowd-AI interaction occurs. This was done for two main reasons. First, a more recent picture of the state of the art in this domain was needed. To this end, only papers published in the last five years (2018–2022) as of 17 December 2022 were inspected. Second, most of the studies considered for taxonomy validation in [41] comprised human-AI interaction at an individual level, while here the focus is on evaluating arrangements involving crowds mixed with AI. The present work is also more restrictive in terms of peer-reviewed studies, since this contribution only considered journal articles and conference papers. From a systematic search for publications indexed by the Dimensions database, which contains records from diverse digital libraries such as ACM Digital Library, IEEE Xplore, SpringerLink, and Science Direct with larger coverage when compared to Web of Science and Scopus [56], content types such as adjunct/companion proceedings, panels, tutorials, book reviews, correspondence articles, introductions to special issues, doctoral colloquiums and student research competitions, keynote talks, commentaries, and course summaries were disregarded to ensure high-quality results. The search returned 593 publication records. After initial scrutiny of the titles and abstracts, along with the removal of papers that did not meet the inclusion criteria, a total of 138 studies were selected for further appraisal. To be eligible for inclusion, studies had to describe original research from primary or secondary literature addressing the broader domain of human-centered AI with a focus on crowd-AI interaction. As can be seen from Figure 1, this selection resulted in 25 research studies published in English-written, peer-reviewed manuscripts (see Appendix A for details). The final set of papers chosen provided a reliable source of information for testing the taxonomic proposal since they presented a diverse set of scenarios.
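As a rough illustration of how such screening criteria can be operationalized, the sketch below applies the Boolean query together with the year and document-type filters to exported bibliographic records; the record field names ("title", "abstract", "year", "doc_type") and the regular expressions are assumptions for illustration rather than the authors' actual screening script, and title/abstract screening would still be followed by manual full-text appraisal.

# Hedged sketch of the screening step; field names and patterns are assumptions.
import re

CROWD_AI = re.compile(
    r"crowd\w*[\s-]ai|ai[\s-]crowd\w*|crowd\w*[\s-]machine|machine[\s-]crowd\w*|crowd\w*[\s-]computing",
    re.IGNORECASE,
)
INTERACT = re.compile(r"interact\w*", re.IGNORECASE)

# Content types excluded from the review (illustrative subset of those listed above).
EXCLUDED_TYPES = {"panel", "tutorial", "book review", "keynote", "commentary", "course summary"}

def matches_query(record: dict) -> bool:
    """(crowd*-AI OR AI-crowd* OR crowd*-machine OR machine-crowd* OR crowd*-computing) AND interact*."""
    text = f"{record.get('title', '')} {record.get('abstract', '')}"
    return bool(CROWD_AI.search(text) and INTERACT.search(text))

def meets_inclusion_criteria(record: dict) -> bool:
    """Journal articles and conference papers published between 2018 and 2022."""
    return (2018 <= record.get("year", 0) <= 2022
            and record.get("doc_type", "").lower() not in EXCLUDED_TYPES)

def screen(records: list) -> list:
    """Title/abstract screening; full-text appraisal would still follow manually."""
    return [r for r in records if matches_query(r) and meets_inclusion_criteria(r)]

sample = [{"title": "Crowd-AI interaction for disaster response", "abstract": "...",
           "year": 2021, "doc_type": "journal article"}]
print(len(screen(sample)))  # -> 1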
As an integral part of the iterative taxonomy development process proposed in [6], the meta-characteristic of the taxonomy was determined to be its focus on the functional properties and attributes of hybrid crowd-AI systems. Through a socio-technical lens grounded in the foundational aspects of crowd computation [57] and its embodiment in hybrid human-AI systems [58], the definition of this meta-characteristic made it possible to frame and guide the taxonomy development process until the subjective ending conditions previously mentioned at the level of robustness, comprehensiveness, conciseness, extendibility, and explanatory nature of the taxonomy were fulfilled. Following the taxonomic work of Landolt and co-authors [59] on the use of deep neural networks in natural language processing (NLP) applications, this contribution also sought to meet objective ending conditions by ensuring that each dimension, and each characteristic within a dimension, was exclusive and that no new characteristics or dimensions were added in the final iteration. Therefore, the original dimensions of the taxonomic proposal were validated within a literature matrix in order to verify whether these dimensions and characteristics are present in the final sample of studies addressing crowd-machine hybrid interaction. To some degree, the empirical validation of the taxonomy proposed here is inspired by the work of Straus [60], who took McGrath’s [8] group task circumplex as the object of evaluation.

4. ‘Inside the Matrix’: In Pursuit of a Taxonomy for Hybrid Crowd-AI Interaction

The availability of crowdsourcing platforms has led many organizations to adopt them as continuous and highly available sources of data upon which the paradigm of open innovation [61] is founded and continues to develop. At their most generic level, these solutions are leveraged by a 24/7 digital workforce and represent a problem-solving and innovation-driven approach able to shorten the entire product lifecycle [62]. As novel AI-infused products and features become more prevalent and integral to many everyday life pursuits, the need to incorporate hybrid intelligence in highly complex and volatile scenarios (e.g., early warning and prompt response) becomes even more evident, since the complementarity [63] and adaptivity [64] of human and AI-based systems co-evolving over time “as coequal partners” [65] can be of particular value in compensating for each other’s failures. In this vein, crowdsourcing has been applied to executing tasks such as obtaining ground-truth human labels [66], gathering ratings for data to be used in supervised machine learning [67], or even managing portfolio information [68]. In general terms, Kittur and associates [57] reported that crowd intelligence could be particularly useful in supervising, training, or even supplementing automation, while AI techniques can make the crowd more accurate while augmenting human capabilities and interactions through machine intelligence. This constitutes the point of departure for the proposal of a taxonomic framework for crowd-AI interaction, whose dimensions are shown in Figure 2 and briefly described in the following subsections.
From a taxonomy-building methodological standpoint, the taxonomic design approach was largely inspired by the Work System Theory as depicted by Alter [69] and further explored by Venumuddala and Kamath [70], who conducted ethnographic work grounded in observations collected in an AI research laboratory. In addition, some elements from the Activity Theory [71]-inspired model for assessing CSCW in distributed settings [72] were also introduced. As a result, a previous human-centered AI framework [41] was revised and extended to highlight the importance of agency and control, explainability, fairness, common ground, and situational awareness in the design space of hybrid crowd-AI systems.

4.1. Temporal and Spatial Axes of Crowd-AI Systems

Crowdsourcing can be seen as a gateway to obtaining reliable solutions to problems of varying levels of difficulty when there is an urgent need for quick and prompt action, or when the development of a game, large-scale application, software module, sketch, etc., is required without the rigid requirement that contributors be physically co-located [73]. At the interaction level, hybrid crowd-AI systems can support real-time crowdsourcing activities involving chatting and live tracking services, as well as those occurring asynchronously, such as post-match soccer video analysis. In framing this discussion within the time-space matrix originally described in the context of groupware applications [12], this article concentrates on the spatio-temporal patterns of human-AI partnerships at a crowd scale. Thus, one can argue that the notion of space has been reshaped to incorporate the provision of localization and navigation information into crowdsourcing settings as a way of exploring the full potential of local-and-remote on-demand real-time response in tasks like road data acquisition [74] and local news reporting [75]. That is, crowd workers can be physically or virtually distributed in a dispersed or co-located manner or even “synchronize in both time and physical space” [76]. As some scholars have noted, the level of engagement in both paid and non-profit crowdsourcing communities can also be evaluated by taking into account the time participants devote daily, the periodicity of interactions, and activity duration [77]. In this regard, the contribution time and availability of the crowd constitute key information sources in crowd-AI hybrid settings.

4.2. Crowd-Machine Hybrid Task Execution and Delegation

The rapid progress of AI-based technology has led to novel ways of motivating humans to delegate tasks to AI. Bouwer [78] proposed a four-quadrant taxonomic model for AI-based task delegation and stressed the importance of emotional/affective states as key deterministic factors for task delegation. In line with this, Lubars and Tan [79] mentioned the relevance of trust, motivation, difficulty, and risk as influential determinants of human-AI delegation decisions. In particular, trust and reliance assume a special significance in terms of delegation preferences. The strategic line behind most of the tasks that are commonly crowdsourced in current digital labor platforms is still grounded in microtask design settings [80], although some recent attention has been given to macrotasking activities (e.g., creative work), which involve crowd-powered tools designed to support computer-hard tasks that need specialized expertise and thus cannot be executed effectively by AI algorithms [81]. By focusing on the task properties and attributes in crowdsourcing, Nakatsu and co-workers [27] introduced a taxonomy that classifies the structure (well-structured vs. unstructured) and level of interdependence (independent vs. interdependent) together with a third binary dimension involving the degree of commitment (low vs. high) required to accomplish a task.
Going back to the levels of complexity that may be present in crowdsourcing tasks, Hosseini et al. [29] briefly divided them into two main categories: simple and complex. Using this rationale, microtasks have been largely described as being simple for crowd workers to perform well and easily in the sense that they involve a lesser degree of context dependence [82]. Furthermore, these self-contained tasks are usually short by nature and take little time to finish. Zulfiqar and co-authors [83] go even further by underlining that microtasks do not require specialized skills, which enables any worker to contribute in a rapid and cognitively effortless manner. Extrapolating to more complex crowdsourcing processes, many forms of advanced crowd work have emerged throughout the years, and there is now a renewed focus on task assignment optimization involving algorithmically supported teams of crowd workers acting collaboratively [84,85]. While the possibilities for optimization are manifold across a number of different task scenarios, robust forms of hybrid crowd-machine task allocation and delegation are needed to yield accurate results and reliable outcomes not only for crowd workers acting at the individual level but also in terms of team composition and related performance.

4.3. Contextual Factors and Situational Characteristics in Crowd-Computing Arrangements

Any crowd-machine hybrid interaction has its own contextual characteristics and specificities. Dwelling on this issue, one may wish to claim that crowdsourcing settings are highly context-dependent and situational information is particularly critical to achieving successful interactions in a crowd-AI working environment since a crowd can be affected by contextual factors such as geo-location, temporal availability, and surrounding devices [86]. Considering the context from which a crowd worker is interacting with an intelligent system can help to personalize the way the actions are developed and thus improve processes, such as task assignment [87] while providing resources and contextually relevant information tailored to the needs of each individual based on content usage behaviors [42] and other forms of context extraction. This involves a set of environmental, social, and cultural contexts [88] that come with fundamental challenges for hybrid algorithmic-crowdsourcing applications in terms of infrastructural support for achieving efficient and accurate context detection and interpretation. When designing a crowd-AI hybrid system, user-generated inputs must be handled adequately in order to filter the relevant information and better adapt the interaction elements and styles to each particular case [89]. In hindsight, this is also somewhat related to the notions of explainability and trust in AI systems [90] since the trustworthy nature of these interactions will be affected by the quality of the contextual information provided and the degree to which a user perceives the AI system they are interacting with as useful for aiding their activities. In such scenarios, aspects like satisfaction shape the internal states of the actors [72] and can constrain the general performance of the crowd-AI partnerships if the system does not meet the expectations of the users.

4.4. Deconstructing the Crowd Behavior Continuum in Hybrid Crowd-Machine Supported Environments

To some extent, both paid and non-paid forms of crowdsourcing have served as “Petri dishes” for many behavioral studies involving experimental work [91]. A crowd can differ in terms of attention level, size, emotional state, motivation and preferences, and expertise/skills, among many other characteristics [86]. In this vein, Robert and Romero [92] found a considerable impact of diversity and crowd size on performance outcomes while testing the registered users of a WikiProject Film community. As such, online crowd behaviors are volatile by nature and vary given the contextual factors and situational complexity of the work, along with the surrounding environment of its members. Neale and co-authors [72] briefly explained the importance of context for creating a common ground which can be understood as the shared awareness among actors in their joint activities, including their mutual knowledge. That is, sustaining an appropriate shared understanding can constitute a critical success factor for achieving a successful interaction when designing intelligent systems [93]. This also applies to the range of crowd work activities that involve self-organized behaviors and transient identities [94], which imply a reinforced need for effective quality control mechanisms (e.g., gold standard questions) in crowd-AI settings [40]. Furthermore, some crowds are arbitrary, while others are socially networked or organized into teams that coalesce and dissolve in response to an open call for solutions where the nature of the task being crowdsourced is largely dependent on collective actions instead of individual effort only. In some specific cases, these tasks are non-decomposable and involve a shared context, mutual dependencies, changing requirements, and expert skills [95,96]. In this vein, some prior research has revealed the presence of “a rich network of collaboration” [97] through which the crowd constituents are connected and interact in a social manner, although there are many concerns about the bias introduced by these social ties. Seen from a human-machine teaming perspective, imbalanced crowd engagement [98], conflict management [99], and lack of common ground [100] are also key aspects that must be taken into account in such arrangements.

4.5. Hybrid Intelligence Systems at a Crowd Scale: An Infrastructural Viewpoint

As AI-infused systems thrive and expand, crowdsourcing platforms continue to play an active role in aggregating inputs that are used by companies and other requesters around the globe toward the ultimate goal of enabling algorithms with the ability to cope with complex problems that neither humans nor machines can solve alone [101]. However, designing for AI with a crowd-in-the-loop includes a set of infrastructure-level elements such as data objects, software elements, and functions that together must provide effective support for actions like assigning tasks, stating rewards, setting time periods, providing feedback, evaluating crowd workers, selecting the best submissions, and aggregating results [102]. To realize the full potential of these systems, online algorithms can be incorporated into task assignment optimization processes for different types of problems involving simple (decomposable), complex (non-decomposable), and well-structured tasks [85]. By showing reasonable results in terms of effectiveness, some algorithms have been proposed to organize teams of crowd workers as cooperative units able to perform joint activities and accomplish tasks of varying complexity [95,96,103]. From an infrastructural perspective fitted into the taxonomy proposed in this article, the contribution of this study builds on Kamar’s [104] work to stress the importance of combining both human and machine capabilities in a co-evolving synergistic way.
Taken together, crowd and machine intelligence can offer a lot of opportunities for predicting future events while improving large-scale decision-making since online algorithms can learn from crowd behavior using different integration and coupling levels. In many settings, hybrid intelligence systems can help to draw novel conclusions by interpreting complex patterns in highly dynamic scenarios. In line with this, many have studied novel forms of incorporating explainable AI approaches, such as gamification [105], for enhancing human perceptions and interpretations of algorithmic decisions in a more transparent and understandable manner. Due to their scalability, crowd-AI architectures can constitute an effective instrument for handling complexity, and thus more research is needed to explore how to best develop hybrid crowd-AI-centered systems taking into account the requirements and personal needs of each crowd worker. In particular, this domain raises some questions about the use of AI to enhance the quality of crowdsourcing outputs through high-quality training data [67] and related interaction experiences, as seen from a human-centered design perspective [106]. To summarize, crowd-powered systems can present a wide variety of opportunities to train algorithms “in situ” [107] while providing learning mechanisms and configuration features for customizing the levels of automation over time.

4.6. ‘Rebuilding from the Ruins’: Hybrid Crowd-Artificial Intelligence and Its Social-Ethical Caveats

There is a clarion call, for several reasons, to investigate the ethical, privacy, and trust aspects of human-AI interaction. For instance, Amershi and colleagues [88] raised a set of concerns related to the need to avoid social biases and detrimental behaviors. To tackle those issues, it is necessary to dive deep into the harms caused by AI decisions in a contextualized way to ensure fairness, transparency, and accountability in such interactions [108]. This can be realized by materializing human agency and other strategies that can provide more control over machine behaviors [109,110,111]. From diversity to inclusiveness, and subsequently justice, there is still a long way to go until these goals are accomplished within the dynamic frame of human-AI interaction and hybrid intelligence augmentation. To address these shortcomings, system developers can play a critical role by considering the potential effects of AI-infused tools on user experiences.
Extrapolating to crowdsourcing settings, Daniel and co-workers [112] reported a concern with the ethical conditions, terms, and standards aligned with compliance with regulations and laws that are sometimes overlooked in such arrangements. When considering crowd work regulation, aspects of intellectual property, privacy, and confidentiality in terms of participant identities constitute pivotal points [113]. A look into previous works (e.g., [114]) shows multiple concerns regarding worker rights, ambiguous task descriptions, acknowledgment of crowd contributions, licensing and consent, low wages, and unjustifiably rejected work. Such ethical and legal issues are even more pronounced in the context of hybrid crowd-AI systems, where there are not only online experiments and other human intelligence tasks (HITs) running on crowdsourcing platforms but also machine-in-the-loop processes within the entire hybrid workflow. In a particular setting, strategies like shared decision-making and informed consent can be particularly helpful to mitigate the threats of bad conduct and malicious work if based on a governance strategy where the guidelines, rules, actions, and policies are socially organized by the crowd itself [115]. In this vein, the potential impacts of the aforementioned socio-ethical concerns surrounding crowd-powered hybrid intelligence systems must be further elucidated and investigated through several lenses to draw a realistic picture of the current situation.

5. Validation and Assessment of the Proposed Taxonomy

This study proposes a taxonomic framework aimed at accommodating a diverse set of infrastructurally supported crowd-algorithm interactions that occur at a certain time and place and span two distinct orders of intelligence that can be combined in a hybrid model architecture. The interactions occurring in this hybrid space have a set of unique contextual and situational aspects and must be guided by ethical guidelines, rules, and principles in order to combine crowd and machine workflows effectively and transparently. To validate the proposed taxonomy and demonstrate its utility, this contribution examined the applicability of the taxonomy in a total of twenty-five studies presenting some type of crowd-machine interaction. This is in line with the need for a methodologically rigorous inspection of the possible effects of hybrid intelligence in practical settings. For instance, substantial literature on human-AI interaction has developed quickly across different areas [116], but few attempts have been made to gather evidence about this intersectional space at a crowd scale and thus understand the uses and limitations of hybrid crowd-AI systems from a socio-technical design viewpoint. The results of the taxonomy-based review are provided in Figure 3, accompanied by an example of a scheme used to explain the rationale behind the taxonomic classification (Figure 4). Further details regarding the 14 journal articles and 11 conference papers selected for taxonomy-based literature analysis are given in Table A1 and Table A2 in Appendix A. To determine whether each category of the taxonomy was present or absent, the following levels were considered:
Fully addressed: The manuscript clearly emphasizes the specific elements underlying the taxonomic category by addressing one or more of its unique attributes, with a potential experiment, solution, or case study demonstrating applicability. For instance, Mohanty and co-authors [38] make explicit reference to the contextual information (e.g., biographical details) provided to the user about each portrait in Photo Sleuth, a crowd-AI-enabled face recognition platform where a crowd of both expert and non-expert volunteers can tag a picture using this supplementary piece of contextual data to aid the decision process.
Not addressed: The work does not directly address any of the aspects that are inherent to the category under consideration.
Partially addressed: The study provides details that can be used to address the particular taxonomic category, even if not explicitly mentioned in the manuscript. By way of example, Kobayashi et al. [117] do not directly provide details about the contextual information required in the natural disaster response setting used for demonstrating the proposed method, but the situational awareness and subsequent timely information required to manage the rapidly evolving scenarios toward well-informed and up-to-date decision-making are implicitly stated.
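The sketch below is a minimal illustration (an assumed structure, not the authors' coding instrument) of how each primary study can be coded against the taxonomic units using these three levels, and how coverage figures such as those reported in the remainder of this section can be derived from the resulting literature matrix; the study identifiers and unit labels are hypothetical placeholders.

# Minimal coding-matrix sketch; identifiers and values are illustrative.
from enum import Enum

class Level(Enum):
    FULLY = "fully addressed"
    PARTIALLY = "partially addressed"
    NOT = "not addressed"

# Literature matrix: study identifier -> {taxonomic unit -> coding level}
matrix = {
    "S01": {"T1": Level.FULLY, "T3": Level.NOT, "T5": Level.PARTIALLY},
    "S02": {"T1": Level.PARTIALLY, "T3": Level.FULLY, "T5": Level.NOT},
    # ... one row per primary study in the final sample
}

def coverage(matrix: dict, unit: str) -> float:
    """Share of studies that fully or partially address a given taxonomic unit."""
    addressed = sum(1 for row in matrix.values()
                    if row.get(unit, Level.NOT) is not Level.NOT)
    return addressed / len(matrix)

print(f"T1 addressed in {coverage(matrix, 'T1'):.0%} of the coded studies")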
On the basis of insights from previous analytical work, this taxonomically grounded literature review process has been adopted in areas like business intelligence and analytics [118] as a way of iteratively developing and refining taxonomic dimensions and characteristics while pinpointing areas requiring further investigation.
As can be seen from Figure 3, the taxonomy presented in this article is far from comprehensive enough to accommodate all types of possible scenarios involving crowd-AI interaction. Instead, the goal is to facilitate a cohesive understanding as a basis for further scrutiny of crowd-computing hybrids in real-world applicative contexts. Note that there are some categories that can co-exist, taking into account the specificity of each situation or use case. As such, the first taxonomic unit contains the spatio-temporal elements (T1) that frame crowd-AI interaction in relation to the original time-space matrix proposed by Johansen [12]. In brief terms, this classification model categorizes interactions as follows: same place/time, different places/same time, same place/different times, or different places/different times. To a broad extent, crowd-AI interactions can occur in asynchronous or real-time settings where the individuals that constitute the crowd can be physically and virtually co-located or geographically dispersed (remote). In addition, the worker location and task duration [11] were also considered, as the latter is intimately connected to the time frame or limit that is set to complete a task. In the example provided in Figure 4, a nearly real-time on-demand crowd-powered system is proposed to collect responses from crowd workers who can be at any location but need to be available to provide contributions in real time due to the quickly changing contextual requirements underlying the type of tasks performed. Looking at the results of the taxonomy-based literature review in detail, a total of 84% (n = 21) of the included papers reported temporal and/or spatial aspects of crowd activity. As a brief example, Chan and colleagues [119] introduced a mixed-initiative system with an annotation time of 1 min per paper on average in analogy matching tasks. In terms of real-time crowd-AI settings, some primary studies (e.g., [36,39,40,98,120]) presented synchronous interactions between crowd members, although most of the crowdsourcing systems relied on an asynchronous model.
Consistent with the previous literature, the most frequently addressed taxonomic unit is related to task design, assignment, and execution (T2), with a total of 25 primary studies. In crowdsourcing experiments, task design is seen as a cornerstone to achieving the goals of a project or campaign, since the characteristics and configuration of crowdsourced tasks influence the general outcomes obtained from the crowd [91]. In general, different types of tasks were found in the selected sample. As mentioned before, tasks differ in terms of attributes, complexity, and granularity [11]. For instance, Scalpel-CD [121] generates label inspection microtasks in a dynamic way, while Evorus [39] focuses on classification tasks in the form of voting. A slightly different task specification is employed in Photo Sleuth [38], where crowd workers are invited to perform person identification/recognition tasks that are then augmented with visual tags to allow portrait seeking. Moreover, CollabLearn [36] is based on crowd query tasks where human processing is needed to highlight damaged areas in cultural heritage imagery. A somewhat related body of work (e.g., [34]) has sought to support the execution of crowd-in-the-loop interactive image labeling tasks with the ultimate goal of enhancing AI-powered damage scene assessment algorithms. All in all, the task-related aspects discussed in the growing literature on the interplay between crowdsourcing and AI systems have been playing an indispensable role in explaining complex relationships among crowd inputs and their further integration into hybrid workflows.
Extrapolating to the ethical principles and standards in crowd-AI settings (T3), the review only identified nine papers (36%) that explicitly discuss ethical behaviors from a requester-, crowd- or even AI-centered standpoint. Despite the recognized need for fair payment and long-term career building in online crowd work platforms [122], this study shows that the ethical concerns underlying the interaction-centric crowd-AI activity are often overlooked from a practical perspective, despite some examples of strategies presented in the crowdsourcing literature such as ensuring fair compensation by paying crowd workers in conformity with the complexity of the task being performed [123]. Based on the findings from the chosen sample, Palmer and co-authors [124] provide one of the few examples of studies calling attention to possible unethical actions associated with the disclosure of sensitive information from images and videos. In a similar way, only 20% of primary studies (n = 5) fully describe machine and human (crowd) agency, governance practices, or control (oversight) (T4), although extensive research has been conducted about the potential risks and unintentional harms associated with the lack of an effective governance strategy able to regulate algorithmic actions [125]. In this regard, trust building [126,127] appears among the most critical factors affecting technology acceptance when considering human-AI interaction at a massive scale.
One enduring taxonomic unit that has been largely addressed since the very beginning of the field of CSCW is concerned with the contextual and situational information (T5) that is then used to support awareness about the environment in which the interaction takes place. This includes what goes on in the environment, who is available, who leaves, and how individuals “remain sensitive to the conduct of others so that an event or action, which may have some passing significance, can be displayed to each other without it necessarily gaining interactional or sequential import” [128]. If the entire sample is considered, 48% of studies (n = 12) mentioned some kind of contextual or situational issues. For instance, Huang et al. [39] proposed a crowd-machine hybrid system where the conversation context is used to provide response candidates using recorded facets and previous chat conversation logs. In particular, the task-specific contextual data is captured with the help of the crowd (by using chat logs) to improve the quality of responses based on current and past conversations. Moreover, Park and associates [129] used self-adapting mechanisms based on reinforcement learning (RL) and contextual features extracted to increase crowdsourcing participation over time, while Guo and co-workers [40] considered the lack of context as a determining factor for failure in smart environments.
Turning to the role of infrastructural support (T6) in interactive human-AI practices at a crowd level, the review disclosed a total of 20 studies (80%) where infrastructure or the characteristics of a crowd-computing platform are reported. In CSCW, the concept of ‘infrastructure’ and its ecological nature [130] has developed over the years to characterize socio-technical assemblages “that underpins and enables action, engagement, and awareness” [131]. On the basis of their research review, Hosseini and colleagues [29] gave a detailed description of the features that are commonly found in crowdsourcing platforms. In line with this, Santos and co-authors [102] stressed that a crowdsourcing system must provide functions and components able to support workflows involving actions such as task assignment, pre-selecting crowd workers, stating rewards, and selecting contributions. From payment mechanisms to result aggregation, a crowd-computing platform must combine crowd-, requester-, task- and platform-related information and facilities (i.e., infrastructural elements) that act in unison to carry out tasks in accordance with the different requirements. From an infrastructural perspective, Huang and associates [39] described the conversational worker interface used for chatting and real-time response modeling along with the automatic response voting and generating algorithms deployed to operate in a continuous manner as the conversation continues. Using a crowd-AI hybrid intelligence lens, the results showed a total of 14 studies addressing algorithmic reasoning, inference, explainability, and interpretability (T7). For instance, human-AI decision-making processes are complex by nature, and AI-infused systems require a certain level of explainability [132] and interpretability [133] to provide insights about the algorithmic actions taken during the AI-enabled experience. However, several studies agree that these explanations must manifestly be comprehensible, transparent, and actionable (i.e., how humans use or find the explanations useful) to ensure traceability and trust in AI-advised crowd decision-making [134]. Moreover, incorporating reasoning capabilities into hybrid intelligence systems at a massive scale can provide support for better decisions since RL and related algorithms can learn from crowd behavior [104] while offering a lot of possibilities to improve decision-making at a large scale.
This points to the notions of scalability and adaptability (T8) and their importance in highly dynamic and unpredictable environments. Due to their flexibility, hybrid crowd-algorithm methods represent a means of handling complexity and gathering high-quality training data. From the entire sample, 17 studies (68%) addressed scalability and/or crowd-AI adaptability. As an example, Anjum et al. [135] stressed the value of scalable image annotation, while Trouille and co-authors [136] have drawn attention to scalable application programming interfaces with the ability to quickly configure a citizen science campaign. A further focus of the taxonomy-based review presented here is on the learning and training processes (T9) behind current AI models. In crowd-machine settings, humans may “feed” the algorithm to act in situ in an automatic fashion based on data inputs that can work as training samples [137]. On this point, 96% of the included studies (n = 24) addressed aspects related to this taxonomic unit. For instance, Kaspar and colleagues [35] proposed a crowd-AI hybrid workflow in which the training data is generated through video segmentation. Further expanding the scope, a related important question is how to train the crowd itself when an AI output is used [117]. Accordingly, Zhang and associates [36,120] call for more research into aspects like AI bias mitigation and the detection of imperfect or biased inputs from the crowd as factors that may compromise the system’s reliability. A look at the work conducted by Huang et al. [39] shows that the machine learning model behind the proposed conversational assistant is fed with training data from past up/down votes given by crowd workers. This continuous learning approach allows optimization of the entire automatic voting process based on the assessment of the quality of the human responses.
Stemming from the literature of the social and behavioral sciences, the extraction of behavioral features from crowd activity (T10) has been particularly relevant for unraveling the complexities of crowdsourcing practice and improving the synergistic interaction between humans (crowds) and algorithms. However, the results from this scoping review show that only 40 percent of the literature sample (n = 10) focused on aspects of crowd activity from a behavioral standpoint. Building on the collective intelligence genome [23], the understanding of what, why, who, how, and the circumstances under which such interaction takes place can be enhanced through the behavioral analysis of traces of past activity [138,139]. In hybrid crowd-algorithm interactive settings, user activity tracking involving keystroke, eye tracking, time duration, and mouse click recording (e.g., window resizing) can contribute to the cognitive, physical, and perceptual augmentation of the crowd, with practical implications for improving task assignment, performance estimation, and worker pre-selection and/or recommendation based on reliability measures [140,141,142,143]. From a behavioral point of view, identifying active workers can play a critical role in systems such as Evorus [39], since the model strongly depends on human inputs, while capturing crowd members’ meta-information is important to personalize the experience to the user in more intelligent ways. Although AI systems supported by online interfaces able to log user actions have a great capacity for conducting behavior analysis [144], recent research works (e.g., [145]) have shown that considerable infrastructural resources are required to capture these behavioral traces effectively.
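As a purely illustrative sketch (the schema and field names are assumptions, not drawn from any cited system), the snippet below shows the kind of low-level activity trace such platforms might log and how a simple behavioral feature, task duration, could be derived from it.

# Assumed activity-trace schema; values are illustrative.
from dataclasses import dataclass

@dataclass
class ActivityEvent:
    worker_id: str
    task_id: str
    event_type: str   # e.g., "keystroke", "mouse_click", "window_resize"
    timestamp: float  # seconds since task start

def task_duration(events: list) -> float:
    """Elapsed time between the first and last logged event for a task."""
    stamps = [e.timestamp for e in events]
    return max(stamps) - min(stamps)

trace = [
    ActivityEvent("w42", "hit-001", "mouse_click", 0.0),
    ActivityEvent("w42", "hit-001", "keystroke", 3.2),
    ActivityEvent("w42", "hit-001", "keystroke", 41.7),
]
print(f"duration: {task_duration(trace):.1f} s")  # -> duration: 41.7 s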
A closely related line of investigation involves the quality control mechanisms (T11) that are used in crowdsourcing systems to reduce the occurrence of inaccuracies and biased inputs provided by malicious (or poorly motivated) crowd workers. Empirically, this work shows that there were only five papers (20%) that did not explicitly report strategies for ensuring quality control and modeling crowd bias. In general terms, quality control strategies for detecting low-quality work can vary from input and output agreement to majority voting/consensus, ground truth (e.g., gold standard questions), contributor evaluation, expert review, real-time support, or even fine-grained behavioral traces [146]. Yet, as pointed out by Daniel and co-authors [112] and further developed by Jin et al. [86], a quality assessment process can be performed computationally (e.g., task execution log analysis), collaboratively (e.g., peer review), or even individually (e.g., qualification test). Regarding the latter, worker pre-selection has been used by requesters as a common approach to filter unqualified workers by taking into consideration factors like reputation and credentials. In the example of the scenario shown in Figure 4, the system has a high error tolerance for imperfect automated actions from voting algorithms and chatbots since the oversight is done by the (human) crowd.
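The following minimal sketch illustrates two of the quality-control mechanisms named above, majority voting over redundant crowd labels and worker screening against gold-standard questions; the function names, data, and the accuracy threshold are illustrative assumptions rather than a specific platform's implementation.

# Hedged quality-control sketch; threshold and data are illustrative.
from collections import Counter

def majority_vote(labels: list) -> str:
    """Aggregate redundant crowd labels for a single item by simple plurality."""
    return Counter(labels).most_common(1)[0][0]

def passes_gold_check(worker_answers: dict, gold_questions: dict,
                      min_accuracy: float = 0.8) -> bool:
    """Retain a worker only if enough embedded gold-standard questions are answered correctly."""
    correct = sum(worker_answers.get(q) == a for q, a in gold_questions.items())
    return correct / len(gold_questions) >= min_accuracy

print(majority_vote(["damaged", "damaged", "intact"]))                            # -> damaged
print(passes_gold_check({"g1": "cat", "g2": "dog"}, {"g1": "cat", "g2": "dog"}))  # -> True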
Throughout the last decades, several scholars have stressed the importance of motivational factors (T12) as a quality assurance determinant and as a catalyst for sustained participation in crowdsourcing [147]. Briefly, the taxonomy-based review identified 20 primary studies (80%) addressing motivation and incentive mechanisms in the use of algorithmic systems powered by crowdsourcing techniques. This includes extrinsic incentives (e.g., immediate payoffs) as well as intrinsic (hedonic) motives like inherent satisfaction and entertainment [112]. For example, Evorus [39] provides a continuously updated scoreboard that displays the reward points given to each crowd worker according to their performance on a particular task, where the value is automatically converted into a monetary bonus. As Truong et al. [148] have noted, crowdsourcing contests are also considered intuitive ways of incentivizing crowd workers and are frequently used in macrotask crowdsourcing for solving problems with an elevated degree of complexity [81,149]. In general terms, the incentives reported in the literature range from monetary rewards to gifts and gamification strategies [112]. Concerning the former, the review presented here also provides a summary of the primary studies from the sample that reported experimental work based on monetary rewards. As Table 1 depicts, 60% of the papers included in the taxonomy-based literature review (n = 15) reported paid experiments in remote settings. For paid crowdsourcing experiments where the crowd had to execute the whole experiment remotely, this part of the analysis considered the time allotted, pre-selection mechanism(s), crowd size, platform(s) used, and reward in terms of cost per HIT in US Dollars ($). This is in line with previous studies (e.g., [91]) reporting aspects related to the several stages of experimental design in crowdsourcing settings.
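As a minimal illustration of such point-based incentives, the following snippet converts scoreboard points into a capped monetary bonus; the conversion rate and cap are assumed values and do not reflect the actual Evorus reward scheme or any platform-specific pricing.

```python
def points_to_bonus(points: int, rate_per_point: float = 0.01, cap: float = 2.00) -> float:
    """Convert scoreboard reward points into a capped monetary bonus in USD.

    The conversion rate and cap are illustrative assumptions; platforms such as
    MTurk allow requesters to grant bonus amounts on top of the base HIT reward.
    """
    return round(min(points * rate_per_point, cap), 2)


scoreboard = {"w1": 57, "w2": 230, "w3": 4}  # hypothetical reward points
bonuses = {worker: points_to_bonus(p) for worker, p in scoreboard.items()}
print(bonuses)  # {'w1': 0.57, 'w2': 2.0, 'w3': 0.04}
```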
Regarding the filtering mechanisms used for early pre-selection of crowd workers, the review of the literature showed five studies in which the HIT acceptance rate was set to more than 95%. Moreover, this contribution also identified four studies in which the number of tasks completed by a potential crowd worker had to be at least 1000. From this scoping review, a total of five experiments involved some type of ground truth in the form of a gold standard or test question. The selected sample also contained cases in which no pre-selection strategies were applied, while one of the experiments disregarded crowd workers with more than 15 percent incorrect answers. It is also worth noting that one of the primary studies involved workers located in the United States only. Taken together, these pre-selection techniques can be useful to specify the characteristics of potential contributors and improve the likelihood that only skilled, high-performing, and/or trustworthy crowd workers are allowed to participate. When considering the platforms used to recruit participants, the results show a clear preference for MTurk (n = 14). Although some tasks were paid up to $0.20, some workers only received $0.05 per task performed. Returning to the payment imbalances and unfair compensation that challenge ethical norms in crowdsourcing marketplaces [150,151], the literature reveals an increasing awareness of crowd workers’ conditions and of the need to set monetary compensation fairly when adopting crowdsourcing for tasks such as data collection and analysis. Overall, this study also revealed different average HIT completion times in accordance with the complexity and requirements of each task, while a notable number of primary studies (n = 10) did not mention the total number of crowd workers involved in the experiment. Nonetheless, some studies involved both crowd workers and experts in their experimental settings, with a crowd size ranging from 2 to 7 crowd workers per task and a maximum of 147 paid online workers in a single experiment.
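The following sketch expresses the pre-selection criteria reported above (approval rate, number of completed HITs, and worker location) as a simple eligibility filter; the worker profiles and exact threshold values are illustrative rather than drawn from any single primary study.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class WorkerProfile:
    worker_id: str
    approval_rate: float   # fraction of previously approved HITs (0-1)
    hits_completed: int
    country: str


def eligible(w: WorkerProfile,
             min_approval: float = 0.95,
             min_hits: int = 1000,
             allowed_countries: Optional[List[str]] = None) -> bool:
    """Apply pre-selection thresholds of the kind reported in the reviewed studies."""
    if w.approval_rate < min_approval or w.hits_completed < min_hits:
        return False
    if allowed_countries is not None and w.country not in allowed_countries:
        return False
    return True


# Hypothetical worker pool used only to demonstrate the filter.
pool = [
    WorkerProfile("w1", 0.97, 2400, "US"),
    WorkerProfile("w2", 0.91, 5300, "US"),
    WorkerProfile("w3", 0.98, 450, "IN"),
]
print([w.worker_id for w in pool if eligible(w, allowed_countries=["US"])])  # ['w1']
```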

6. Concluding Discussion and Challenges Ahead

Owing to the difficulty of handling problems of increasing complexity involving noisy and complex data streams, hybrid crowd-machine interactive workflows have been implemented to efficiently scale training data and model parameters in order to produce insights and support decision-making processes in a way that was not possible using conventional methods. In various problem domains, new patterns can be identified from complex decision rules for further verification on a human-in-the-loop basis, encapsulated in crowd-AI systems and architectures able to support tasks like content regulation and medical diagnosis. Considering the latter, machine learning skills are now increasingly crowdsourced in the form of contests or competitions running on predictive modeling and analytics services, where both monetary and non-monetary incentives are used to aggregate crowd knowledge and thus help streamline the early detection and treatment processes that are critical in healthcare settings. However, building trust in crowd-machine interaction and making AI more efficient and adaptable are among the prevalent challenges in crowdsourcing and are usually seen as hindering factors for the successful adoption and use of these systems in practice.
In this study, an initial taxonomy of crowd-AI hybrid interaction was proposed as a guiding framework for system developers, public and private health professionals, scientists, and other stakeholders worldwide interested in this emerging area. Despite the contribution towards a comprehensive scheme for explaining how crowd-machine hybrid interaction has been addressed in the scenarios presented in the literature, this article constitutes only one piece of a much larger puzzle. In other words, the work presented here is intended as a basis for further expansion and testing in real-world contexts, in the form of continuous observation of the co-evolving relations between humans and algorithms, with the goal of informing the design of intelligent systems adequately and cohesively. Framing a territory in constant expansion like crowd-AI hybrids is a challenging task. Overall, the taxonomy-based review found a gap in terms of understanding, both empirically and conceptually, the role of ethical principles and perceived fairness in building and deploying AI responsibly and with adequate governance strategies. This study also shows that more experimentation and additional investigative steps will be needed to cope with inconsistent records from crowd workers. Moreover, several directions for future work remain worth pursuing in the near term, particularly new research practices involving crowd-computing hybrids, so that scientific institutions, companies, and the general public can all benefit from the knowledge generated by this convergence and better respond to the volatile nature and changing demands of current environments.

Author Contributions

Conceptualization, methodology, formal analysis & writing–original draft, A.C.; supervision, A.G.; writing–review and editing, D.S.; investigation, A.P.P.; investigation, R.C.; investigation, M.A.d.A.; supervision & validation, B.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been sponsored by National Funds through FLAD—Luso-American Development Foundation and FCT—Portuguese Foundation for Science and Technology. The work of António Correia is supported by FCT grant SFRH/BD/136211/2018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there are no conflicts of interest or competing financial interests related to this manuscript.

Appendix A

Table A1. List of primary studies included in the taxonomic validation process.
ID | Author(s) | Year | Title
P1 | Huang et al. | 2018 | Evorus: A crowd-powered conversational assistant built to automate itself over time
P2 | Kaspar et al. | 2018 | Crowd-guided ensembles: How can we choreograph crowd workers for video segmentation?
P3 | Guo et al. | 2018 | Crowd-AI camera sensing in the real world
P4 | Nushi et al. | 2018 | Towards accountable AI: Hybrid human-machine analyses for characterizing system failure
P5 | Krivosheev et al. | 2018 | Combining crowd and machines for multi-predicate item screening
P6 | Chan et al. | 2018 | SOLVENT: A mixed initiative system for finding analogies between research papers
P7 | Yang et al. | 2019 | Scalpel-CD: Leveraging crowdsourcing and deep probabilistic modeling for debugging noisy training data
P8 | Trouille et al. | 2019 | Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human-machine systems
P9 | Park et al. | 2019 | AI-based request augmentation to increase crowdsourcing participation
P10 | Kittur et al. | 2019 | Scaling up analogical innovation with crowds and AI
P11 | Mohanty et al. | 2020 | Photo Sleuth: Identifying historical portraits with face recognition and crowdsourced human expertise
P12 | Zhang et al. | 2020 | Crowd-assisted disaster scene assessment with human-AI interactive attention
P13 | Zhang et al. | 2021 | CollabLearn: An uncertainty-aware crowd-AI collaboration system for cultural heritage damage assessment
P14 | Kobayashi et al. | 2021 | Human+AI crowd task assignment considering result quality requirements
P15 | Palmer et al. | 2021 | Citizen science, computing, and conservation: How can “Crowd AI” change the way we tackle large-scale ecological challenges?
P16 | Anjum et al. | 2021 | Exploring the use of deep learning with crowdsourcing to annotate images
P17 | Zhang et al. | 2021 | StreamCollab: A streaming crowd-AI collaborative system to smart urban infrastructure monitoring in social sensing
P18 | Lemmer et al. | 2021 | Crowdsourcing more effective initializations for single-target trackers through automatic re-querying
P19 | Groh et al. | 2022 | Deepfake detection by human crowds, machines, and machine-informed crowds
P20 | Zhang et al. | 2022 | On streaming disaster damage assessment in social sensing: A crowd-driven dynamic neural architecture searching approach
P21 | Kou et al. | 2022 | Crowd, expert & AI: A human-AI interactive approach towards natural language explanation based COVID-19 misinformation detection
P22 | Guo et al. | 2022 | CrowdHMT: Crowd intelligence with the deep fusion of human, machine, and IoT
P23 | Wang et al. | 2022 | Graph optimized data offloading for crowd-AI hybrid urban tracking in intelligent transportation systems
P24 | Gal et al. | 2022 | A new workflow for human-AI collaboration in citizen science
P25 | Zhang et al. | 2022 | CrowdOptim: A crowd-driven neural network hyperparameter optimization approach to AI-based smart urban sensing
Table A2. Distribution of publications per venue.
Conference Proceedings:
AAAI Conference on Artificial Intelligence
AAAI Conference on Human Computation and Crowdsourcing (4)
ACM Conference on Human Factors in Computing Systems (3)
ACM Conference on Information Technology for Social Good
ACM Web Conference
International Joint Conference on Artificial Intelligence

Journal/Transactions:
ACM Transactions on Interactive Intelligent Systems
Human Computation (2)
IEEE Internet of Things Journal
IEEE Transactions on Computational Social Systems
IEEE Transactions on Intelligent Transportation Systems
Knowledge-Based Systems
Proceedings of the ACM on Human-Computer Interaction (3)
Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies
Proceedings of the National Academy of Sciences (3)

References

  1. Lofi, C.; El Maarry, K. Design patterns for hybrid algorithmic-crowdsourcing workflows. In Proceedings of the IEEE 16th Conference on Business Informatics, Geneva, Switzerland, 14–17 July 2014; pp. 1–8. [Google Scholar]
  2. Heim, E.; Roß, T.; Seitel, A.; März, K.; Stieltjes, B.; Eisenmann, M.; Lebert, J.; Metzger, J.; Sommer, G.; Sauter, A.W.; et al. Large-scale medical image annotation with crowd-powered algorithms. J. Med. Imaging 2018, 5, 034002. [Google Scholar] [CrossRef] [PubMed]
  3. Vargas-Santiago, M.; Monroy, R.; Ramirez-Marquez, J.E.; Zhang, C.; Leon-Velasco, D.A.; Zhu, H. Complementing solutions to optimization problems via crowdsourcing on video game plays. Appl. Sci. 2020, 10, 8410. [Google Scholar] [CrossRef]
  4. Bharadwaj, A.; Gwizdala, D.; Kim, Y.; Luther, K.; Murali, T.M. Flud: A hybrid crowd–algorithm approach for visualizing biological networks. ACM Trans. Comput. Interact. 2022, 29, 1–53. [Google Scholar] [CrossRef]
  5. Grudin, J.; Poltrock, S. Taxonomy and theory in computer supported cooperative work. Oxf. Handb. Organ. Psychol. 2012, 2, 1323–1348. [Google Scholar] [CrossRef]
  6. Nickerson, R.C.; Varshney, U.; Muntermann, J. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 2013, 22, 336–359. [Google Scholar] [CrossRef]
  7. Harris, A.M.; Gómez-Zará, D.; DeChurch, L.A.; Contractor, N.S. Joining together online: The trajectory of CSCW scholarship on group formation. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–27. [Google Scholar] [CrossRef]
  8. McGrath, J.E. Groups: Interaction and Performance; Prentice-Hall: Englewood Cliffs, NJ, USA, 1984. [Google Scholar]
  9. Shaw, M.E. Scaling group tasks: A method for dimensional analysis. JSAS Cat. Sel. Doc. Psychol. 1973, 3, 8. [Google Scholar]
  10. Modaresnezhad, M.; Iyer, L.; Palvia, P.; Taras, V. Information technology (IT) enabled crowdsourcing: A conceptual framework. Inf. Process. Manag. 2020, 57, 102135. [Google Scholar] [CrossRef]
  11. Bhatti, S.S.; Gao, X.; Chen, G. General framework, opportunities and challenges for crowdsourcing techniques: A comprehensive survey. J. Syst. Softw. 2020, 167, 110611. [Google Scholar] [CrossRef]
  12. Johansen, R. Groupware: Computer Support for Business Teams; The Free Press: New York, NY, USA, 1988. [Google Scholar]
  13. Lee, C.P.; Paine, D. From the matrix to a model of coordinated action (MoCA): A conceptual framework of and for CSCW. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 179–194. [Google Scholar]
  14. Renyi, M.; Gaugisch, P.; Hunck, A.; Strunck, S.; Kunze, C.; Teuteberg, F. Uncovering the complexity of care networks—Towards a taxonomy of collaboration complexity in homecare. Comput. Support. Cooperative Work. (CSCW) 2022, 31, 517–554. [Google Scholar] [CrossRef]
  15. Thomer, A.K.; Twidale, M.B.; Yoder, M.J. Transforming taxonomic interfaces: “Arm’s length” cooperative work and the maintenance of a long-lived classification system. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–23. [Google Scholar] [CrossRef]
  16. Akata, Z.; Balliet, D.; de Rijke, M.; Dignum, F.; Dignum, V.; Eiben, G.; Fokkens, A.; Grossi, D.; Hindriks, K.V.; Hoos, H.H.; et al. A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer 2020, 53, 18–28. [Google Scholar] [CrossRef]
  17. Pescetelli, N. A brief taxonomy of hybrid intelligence. Forecasting 2021, 3, 633–643. [Google Scholar] [CrossRef]
  18. Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; Ebel, P. The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; pp. 274–283. [Google Scholar]
  19. Dubey, A.; Abhinav, K.; Jain, S.; Arora, V.; Puttaveerana, A. HACO: A framework for developing human-AI teaming. In Proceedings of the 13th Innovations in Software Engineering Conference, Jabalpur, India, 27–29 February 2020; pp. 1–9. [Google Scholar]
  20. Littmann, M.; Suomela, T. Crowdsourcing, the great meteor storm of 1833, and the founding of meteor science. Endeavour 2014, 38, 130–138. [Google Scholar] [CrossRef]
  21. Corney, J.R.; Torres-Sánchez, C.; Jagadeesan, A.P.; Regli, W.C. Outsourcing labour to the cloud. Int. J. Innovation Sustain. Dev. 2009, 4, 294–313. [Google Scholar] [CrossRef]
  22. Rouse, A.C. A preliminary taxonomy of crowdsourcing. In Proceedings of the Australasian Conference on Information Systems, Brisbane, Australia, 1–3 December 2010; Volume 76. [Google Scholar]
  23. Malone, T.W.; Laubacher, R.; Dellarocas, C. The collective intelligence genome. IEEE Eng. Manag. Rev. 2010, 38, 38–52. [Google Scholar] [CrossRef]
  24. Zwass, V. Co-creation: Toward a taxonomy and an integrated research perspective. Int. J. Electron. Commer. 2010, 15, 11–48. [Google Scholar] [CrossRef]
  25. Doan, A.; Ramakrishnan, R.; Halevy, A.Y. Crowdsourcing systems on the world-wide web. Commun. ACM 2011, 54, 86–96. [Google Scholar] [CrossRef]
  26. Saxton, G.D.; Oh, O.; Kishore, R. Rules of crowdsourcing: Models, issues, and systems of control. Inf. Syst. Management 2013, 30, 2–20. [Google Scholar] [CrossRef]
  27. Nakatsu, R.T.; Grossman, E.B.; Iacovou, C.L. A taxonomy of crowdsourcing based on task complexity. J. Inf. Sci. 2014, 40, 823–834. [Google Scholar] [CrossRef]
  28. Gadiraju, U.; Kawase, R.; Dietze, S. A taxonomy of microtasks on the web. In Proceedings of the 25th ACM Conference on Hypertext and Social Media, Santiago, Chile, 1–4 September 2014; pp. 218–223. [Google Scholar]
  29. Hosseini, M.; Shahri, A.; Phalp, K.; Taylor, J.; Ali, R. Crowdsourcing: A taxonomy and systematic mapping study. Comput. Sci. Rev. 2015, 17, 43–69. [Google Scholar] [CrossRef] [Green Version]
  30. Alabduljabbar, R.; Al-Dossari, H. Towards a classification model for tasks in crowdsourcing. In Proceedings of the Second International Conference on Internet of Things and Cloud Computing, Cambridge, UK, 22–23 March 2017; pp. 1–7. [Google Scholar]
  31. Chen, Q.; Magnusson, M.; Björk, J. Exploring the effects of problem- and solution-related knowledge sharing in internal crowdsourcing. J. Knowl. Manag. 2022, 26, 324–347. [Google Scholar] [CrossRef]
  32. Chilton, L.B.; Little, G.; Edge, D.; Weld, D.S.; Landay, J.A. Cascade: Crowdsourcing taxonomy creation. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 1999–2008. [Google Scholar]
  33. Sharif, A.; Gopal, P.; Saugstad, M.; Bhatt, S.; Fok, R.; Weld, G.; Dey, K.A.M.; Froehlich, J.E. Experimental crowd+AI approaches to track accessibility features in sidewalk intersections over time. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, 18–22 October 2021; pp. 1–5. [Google Scholar]
  34. Zhang, D.Y.; Huang, Y.; Zhang, Y.; Wang, D. Crowd-assisted disaster scene assessment with human-AI interactive attention. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 2717–2724. [Google Scholar]
  35. Kaspar, A.; Patterson, G.; Kim, C.; Aksoy, Y.; Matusik, W.; Elgharib, M. Crowd-guided ensembles: How can we choreograph crowd workers for video segmentation? In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar]
  36. Zhang, Y.; Zong, R.; Kou, Z.; Shang, L.; Wang, D. CollabLearn: An uncertainty-aware crowd-AI collaboration system for cultural heritage damage assessment. IEEE Trans. Comput. Soc. Syst. 2021, 9, 1515–1529. [Google Scholar] [CrossRef]
  37. Maier-Hein, L.; Ross, T.; Gröhl, J.; Glocker, B.; Bodenstedt, S.; Stock, C.; Heim, E.; Götz, M.; Wirkert, S.J.; Kenngott, H.; et al. Crowd-algorithm collaboration for large-scale endoscopic image annotation with confidence. In Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 616–623. [Google Scholar]
  38. Mohanty, V.; Thames, D.; Mehta, S.; Luther, K. Photo Sleuth: Combining human expertise and face recognition to identify historical portraits. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; pp. 547–557. [Google Scholar]
  39. Huang, T.H.; Chang, J.C.; Bigham, J.P. Evorus: A crowd-powered conversational assistant built to automate itself over time. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; p. 295. [Google Scholar]
  40. Guo, A.; Jain, A.; Ghose, S.; Laput, G.; Harrison, C.; Bigham, J.P. Crowd-AI camera sensing in the real world. Proc. ACM Interactive, Mobile, Wearable Ubiquitous Technol. 2018, 2, 1–20. [Google Scholar] [CrossRef]
  41. Correia, A.; Paredes, H.; Schneider, D.; Jameel, S.; Fonseca, B. Towards hybrid crowd-AI centered systems: Developing an integrated framework from an empirical perspective. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Bari, Italy, 6–9 October 2019; pp. 4013–4018. [Google Scholar]
  42. Xu, W.; Dainoff, M.J.; Ge, L.; Gao, Z. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J. Human–Computer Interact. 2022, 39, 494–518. [Google Scholar] [CrossRef]
  43. Colazo, M.; Alvarez-Candal, A.; Duffard, R. Zero-phase angle asteroid taxonomy classification using unsupervised machine learning algorithms. Astron. Astrophys. 2022, 666, A77. [Google Scholar] [CrossRef]
  44. Mock, F.; Kretschmer, F.; Kriese, A.; Böcker, S.; Marz, M. Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proc. Natl. Acad. Sci. USA 2022, 119, e2122636119. [Google Scholar] [CrossRef]
  45. Rasch, R.F. The nature of taxonomy. Image J. Nurs. Scholarsh. 1987, 19, 147–149. [Google Scholar] [CrossRef]
  46. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.; Colquhoun, H.; Kastner, M.; Levac, D.; Ng, C.; Sharpe, J.P.; Wilson, K.; et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med. Res. Methodol. 2016, 16, 15. [Google Scholar] [CrossRef]
  47. Sokal, R.R. Phenetic taxonomy: Theory and methods. Annu. Rev. Ecol. Syst. 1986, 17, 423–442. [Google Scholar] [CrossRef]
  48. Oberländer, A.M.; Lösser, B.; Rau, D. Taxonomy research in information systems: A systematic assessment. In Proceedings of the 27th European Conference on Information Systems, Stockholm and Uppsala, Sweden, 8–14 June 2019. [Google Scholar]
  49. Gerber, A. Computational ontologies as classification artifacts in IS research. In Proceedings of the 24th Americas Conference on Information Systems, New Orleans, LA, USA, 16–18 August 2018. [Google Scholar]
  50. Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 2002, 26, 2. [Google Scholar]
  51. Schmidt-Kraepelin, M.; Thiebes, S.; Tran, M.C.; Sunyaev, A. What’s in the game? Developing a taxonomy of gamification concepts for health apps. In Proceedings of the 51st Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 3–6 January 2018; pp. 1–10. [Google Scholar]
  52. Sai, A.R.; Buckley, J.; Fitzgerald, B.; Le Gear, A. Taxonomy of centralization in public blockchain systems: A systematic literature review. Inf. Process. Manag. 2021, 58, 102584. [Google Scholar] [CrossRef]
  53. Andraschko, L.; Wunderlich, P.; Veit, D.; Sarker, S. Towards a taxonomy of smart home technology: A preliminary understanding. In Proceedings of the 42nd International Conference on Information Systems, Austin, TX, USA, 12–15 December 2021. [Google Scholar]
  54. Larsen, K.R.; Hovorka, D.; Dennis, A.; West, J.D. Understanding the elephant: The discourse approach to boundary identification and corpus construction for theory review articles. J. Assoc. Inf. Syst. 2019, 20, 15. [Google Scholar] [CrossRef]
  55. Elliott, J.H.; Turner, T.; Clavisi, O.; Thomas, J.; Higgins, J.P.T.; Mavergames, C.; Gruen, R.L. Living systematic reviews: An emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014, 11, e1001603. [Google Scholar] [CrossRef]
  56. Singh, V.K.; Singh, P.; Karmakar, M.; Leta, J.; Mayr, P. The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis. Scientometrics 2021, 126, 5113–5142. [Google Scholar] [CrossRef]
  57. Kittur, A.; Nickerson, J.V.; Bernstein, M.; Gerber, E.; Shaw, A.; Zimmerman, J.; Lease, M.; Horton, J.J. The future of crowd work. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, 23–27 February 2013; pp. 1301–1318. [Google Scholar]
  58. Zhang, D.; Zhang, Y.; Li, Q.; Plummer, T.; Wang, D. CrowdLearn: A crowd-AI hybrid system for deep learning-based damage assessment applications. In Proceedings of the 39th IEEE International Conference on Distributed Computing Systems, Dallas, TX, USA, 7–10 July 2019; pp. 1221–1232. [Google Scholar]
  59. Landolt, S.; Wambsganss, T.; Söllner, M. A taxonomy for deep learning in natural language processing. In Proceedings of the 54th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5 January 2021; pp. 1061–1070. [Google Scholar]
  60. Straus, S.G. Testing a typology of tasks: An empirical validation of McGrath’s (1984) group task circumplex. Small Group Research 1999, 30, 166–187. [Google Scholar] [CrossRef]
  61. Chesbrough, H.W. Open Innovation: The New Imperative for Creating and Profiting from Technology; Harvard Business Press: Boston, MA, USA, 2003. [Google Scholar]
  62. Karachiwalla, R.; Pinkow, F. Understanding crowdsourcing projects: A review on the key design elements of a crowdsourcing initiative. Creativity Innov. Manag. 2021, 30, 563–584. [Google Scholar] [CrossRef]
  63. Hemmer, P.; Schemmer, M.; Vössing, M.; Kühl, N. Human-AI complementarity in hybrid intelligence systems: A structured literature review. In Proceedings of the 25th Pacific Asia Conference on Information Systems, Virtual Event, Dubai, United Arab Emirates, 12–14 July 2021; p. 78. [Google Scholar]
  64. Holstein, K.; Aleven, V.; Rummel, N. A conceptual framework for human-AI hybrid adaptivity in education. In Proceedings of the 21st International Conference on Artificial Intelligence in Education, Ifrane, Morocco, 6–10 July 2020; pp. 240–254. [Google Scholar]
  65. Siemon, D. Elaborating team roles for artificial intelligence-based teammates in human-AI collaboration. Group Decis. Negot. 2022, 31, 871–912. [Google Scholar] [CrossRef]
  66. Weber, E.; Marzo, N.; Papadopoulos, D.P.; Biswas, A.; Lapedriza, A.; Ofli, F.; Imran, M.; Torralba, A. Detecting natural disasters, damage, and incidents in the wild. In Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 331–350. [Google Scholar]
  67. Vaughan, J.W. Making better use of the crowd: How crowdsourcing can advance machine learning research. J. Mach. Learn. Res. 2017, 18, 7026–7071. [Google Scholar]
  68. Hamadi, R.; Ghazzai, H.; Massoud, Y. A generative adversarial network for financial advisor recruitment in smart crowdsourcing platforms. Appl. Sci. 2022, 12, 9830. [Google Scholar] [CrossRef]
  69. Alter, S. Work system theory: Overview of core concepts, extensions, and challenges for the future. J. Assoc. Inf. Syst. 2013, 14, 2. [Google Scholar] [CrossRef]
  70. Venumuddala, V.R.; Kamath, R. Work systems in the Indian information technology (IT) industry delivering artificial intelligence (AI) solutions and the challenges of work from home. Inf. Syst. Front. 2022, 1–25. [Google Scholar] [CrossRef] [PubMed]
  71. Nardi, B. Context and Consciousness: Activity Theory and Human-Computer Interaction; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  72. Neale, D.C.; Carroll, J.M.; Rosson, M.B. Evaluating computer-supported cooperative work: Models and frameworks. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chicago, IL, USA, 6–10 November 2004; pp. 112–121. [Google Scholar]
  73. Lee, S.W.; Krosnick, R.; Park, S.Y.; Keelean, B.; Vaidya, S.; O’Keefe, S.D.; Lasecki, W.S. Exploring real-time collaboration in crowd-powered systems through a UI design tool. Proc. ACM Human-Computer Interact. 2018, 2, 1–23. [Google Scholar] [CrossRef]
  74. Wang, X.; Ding, L.; Wang, Q.; Xie, J.; Wang, T.; Tian, X.; Guan, Y.; Wang, X. A picture is worth a thousand words: Share your real-time view on the road. IEEE Trans. Veh. Technol. 2016, 66, 2902–2914. [Google Scholar] [CrossRef]
  75. Agapie, E.; Teevan, J.; Monroy-Hernández, A. Crowdsourcing in the field: A case study using local crowds for event reporting. In Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing, San Diego, CA, USA, 8–11 November 2015; pp. 2–11. [Google Scholar]
  76. Lafreniere, B.J.; Grossman, T.; Anderson, F.; Matejka, J.; Kerrick, H.; Nagy, D.; Vasey, L.; Atherton, E.; Beirne, N.; Coelho, M.H.; et al. Crowdsourced fabrication. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 15–28. [Google Scholar]
  77. Aristeidou, M.; Scanlon, E.; Sharples, M. Profiles of engagement in online communities of citizen science participation. Comput. Hum. Behav. 2017, 74, 246–256. [Google Scholar] [CrossRef]
  78. Bouwer, A. Under which conditions are humans motivated to delegate tasks to AI? A taxonomy on the human emotional state driving the motivation for AI delegation. In Marketing and Smart Technologies; Springer: Singapore, 2022; pp. 37–53. [Google Scholar]
  79. Lubars, B.; Tan, C. Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 57–67. [Google Scholar]
  80. Sun, Y.; Ma, X.; Ye, K.; He, L. Investigating crowdworkers’ identify, perception and practices in micro-task crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–20. [Google Scholar] [CrossRef]
  81. Khan, V.J.; Papangelis, K.; Lykourentzou, I.; Markopoulos, P. Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019. [Google Scholar]
  82. Teevan, J. The future of microwork. XRDS Crossroads ACM Mag. Stud. 2016, 23, 26–29. [Google Scholar] [CrossRef]
  83. Zulfiqar, M.; Malik, M.N.; Khan, H.H. Microtasking activities in crowdsourced software development: A systematic literature review. IEEE Access 2022, 10, 24721–24737. [Google Scholar] [CrossRef]
  84. Rahman, H.; Roy, S.B.; Thirumuruganathan, S.; Amer-Yahia, S.; Das, G. Optimized group formation for solving collaborative tasks. VLDB J. 2018, 28, 1–23. [Google Scholar] [CrossRef]
  85. Schmitz, H.; Lykourentzou, I. Online sequencing of non-decomposable macrotasks in expert crowdsourcing. ACM Trans. Soc. Comput. 2018, 1, 1–33. [Google Scholar] [CrossRef]
  86. Jin, Y.; Carman, M.; Zhu, Y.; Xiang, Y. A technical survey on statistical modelling and design methods for crowdsourcing quality control. Artif. Intell. 2020, 287, 103351. [Google Scholar] [CrossRef]
  87. Moayedikia, A.; Ghaderi, H.; Yeoh, W. Optimizing microtask assignment on crowdsourcing platforms using Markov chain Monte Carlo. Decis. Support Syst. 2020, 139, 113404. [Google Scholar] [CrossRef]
  88. Amershi, S.; Weld, D.; Vorvoreanu, M.; Fourney, A.; Nushi, B.; Collisson, P.; Suh, J.; Iqbal, S.T.; Bennett, P.N.; Inkpen, K.; et al. Guidelines for human-AI interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019. [Google Scholar]
  89. Rafner, J.; Gajdacz, M.; Kragh, G.; Hjorth, A.; Gander, A.; Palfi, B.; Berditchevskiaia, A.; Grey, F.; Gal, K.; Segal, A.; et al. Mapping citizen science through the lens of human-centered AI. Hum. Comput. 2022, 9, 66–95. [Google Scholar] [CrossRef]
  90. Shneiderman, B. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 2020, 10, 1–31. [Google Scholar] [CrossRef]
  91. Ramírez, J.; Sayin, B.; Baez, M.; Casati, F.; Cernuzzi, L.; Benatallah, B.; Demartini, G. On the state of reporting in crowdsourcing experiments and a checklist to aid current practices. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–34. [Google Scholar] [CrossRef]
  92. Robert, L.; Romero, D.M. Crowd size, diversity and performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 1379–1382. [Google Scholar]
  93. Blandford, A. Intelligent interaction design: The role of human-computer interaction research in the design of intelligent systems. Expert Syst. 2001, 18, 3–18. [Google Scholar] [CrossRef]
  94. Huang, K.; Zhou, J.; Chen, S. Being a solo endeavor or team worker in crowdsourcing contests? It is a long-term decision you need to make. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–32. [Google Scholar] [CrossRef]
  95. Venkatagiri, S.; Thebault-Spieker, J.; Kohler, R.; Purviance, J.; Mansur, R.S.; Luther, K. GroundTruth: Augmenting expert image geolocation with crowdsourcing and shared representations. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–30. [Google Scholar] [CrossRef]
  96. Zhou, S.; Valentine, M.; Bernstein, M.S. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018. [Google Scholar]
  97. Gray, M.L.; Suri, S.; Ali, S.S.; Kulkarni, D. The crowd is a collaborative network. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; pp. 134–147. [Google Scholar]
  98. Zhang, X.; Zhang, W.; Zhao, Y.; Zhu, Q. Imbalanced volunteer engagement in cultural heritage crowdsourcing: A task-related exploration based on causal inference. Inf. Process. Manag. 2022, 59, 103027. [Google Scholar] [CrossRef]
  99. McNeese, N.J.; Demir, M.; Cooke, N.J.; She, M. Team situation awareness and conflict: A study of human–machine teaming. J. Cogn. Eng. Decis. Mak. 2021, 15, 83–96. [Google Scholar] [CrossRef]
  100. Dafoe, A.; Bachrach, Y.; Hadfield, G.; Horvitz, E.; Larson, K.; Graepel, T. Cooperative AI: Machines must learn to find common ground. Nature 2021, 593, 33–36. [Google Scholar] [CrossRef]
  101. Alorwu, A.; Savage, S.; van Berkel, N.; Ustalov, D.; Drutsa, A.; Oppenlaender, J.; Bates, O.; Hettiachchi, D.; Gadiraju, U.; Gonçalves, J.; et al. REGROW: Reimagining global crowdsourcing for better human-AI collaboration. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7. [Google Scholar]
  102. Santos, C.A.; Baldi, A.M.; de Assis Neto, F.R.; Barcellos, M.P. Essential elements, conceptual foundations and workflow design in crowd-powered projects. J. Inf. Sci. 2022. [Google Scholar] [CrossRef]
  103. Valentine, M.A.; Retelny, D.; To, A.; Rahmati, N.; Doshi, T.; Bernstein, M.S. Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 3523–3537. [Google Scholar]
  104. Kamar, E. Directions in hybrid intelligence: Complementing AI systems with human intelligence. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 4070–4073. [Google Scholar]
  105. Tocchetti, A.; Corti, L.; Brambilla, M.; Celino, I. EXP-Crowd: A gamified crowdsourcing framework for explainability. Front. Artif. Intell. 2022, 5, 826499. [Google Scholar] [CrossRef] [PubMed]
  106. Barbosa, N.M.; Chen, M. Rehumanized crowdsourcing: A labeling framework addressing bias and ethics in machine learning. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–12. [Google Scholar]
  107. Basker, T.; Tottler, D.; Sanguet, R.; Muffbur, J. Artificial intelligence and human learning: Improving analytic reasoning via crowdsourcing and structured analytic techniques. Comput. Educ. 2022, 3, 1003056. [Google Scholar]
  108. Mirbabaie, M.; Brendel, A.B.; Hofeditz, L. Ethics and AI in information systems research. Commun. Assoc. Inf. Syst. 2022, 50, 38. [Google Scholar] [CrossRef]
  109. Sundar, S.S. Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). J. Comput. Commun. 2020, 25, 74–88. [Google Scholar] [CrossRef]
  110. Liu, B. In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. J. Comput. Commun. 2021, 26, 384–402. [Google Scholar] [CrossRef]
  111. Kang, H.; Lou, C. AI agency vs. human agency: Understanding human–AI interactions on TikTok and their implications for user engagement. J. Comput. Commun. 2022, 27, zmac014. [Google Scholar] [CrossRef]
  112. Daniel, F.; Kucherbaev, P.; Cappiello, C.; Benatallah, B.; Allahbakhsh, M. Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. ACM Comput. Surv. 2018, 51, 1–40. [Google Scholar] [CrossRef]
  113. Pedersen, J.; Kocsis, D.; Tripathi, A.; Tarrell, A.; Weerakoon, A.; Tahmasbi, N.; Xiong, J.; Deng, W.; Oh, O.; de Vreede, G.-J. Conceptual foundations of crowdsourcing: A review of IS research. In Proceedings of the 46th Hawaii International Conference on System Sciences, Wailea, HI, USA, 7–10 January 2013; pp. 579–588. [Google Scholar]
  114. Hansson, K.; Ludwig, T. Crowd dynamics: Conflicts, contradictions, and community in crowdsourcing. Comput. Support. Coop. Work. 2019, 28, 791–794. [Google Scholar] [CrossRef]
  115. Gimpel, H.; Graf-Seyfried, V.; Laubacher, R.; Meindl, O. Towards artificial intelligence augmenting facilitation: AI affordances in macro-task crowdsourcing. Group Decis. Negot. 2023, 1–50. [Google Scholar] [CrossRef]
  116. Wu, T.; Terry, M.; Cai, C.J. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022. [Google Scholar]
  117. Kobayashi, M.; Wakabayashi, K.; Morishima, A. Human+AI crowd task assignment considering result quality requirements. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing, Virtual, 14–18 November 2021; pp. 97–107. [Google Scholar]
  118. Eggert, M.; Alberts, J. Frontiers of business intelligence and analytics 3.0: A taxonomy-based literature review and research agenda. Bus. Res. 2020, 13, 685–739. [Google Scholar] [CrossRef]
  119. Chan, J.; Chang, J.C.; Hope, T.; Shahaf, D.; Kittur, A. SOLVENT: A mixed initiative system for finding analogies between research papers. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–21. [Google Scholar] [CrossRef]
  120. Zhang, Y.; Shang, L.; Zong, R.; Wang, Z.; Kou, Z.; Wang, D. StreamCollab: A streaming crowd-AI collaborative system to smart urban infrastructure monitoring in social sensing. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing, Virtual, 14–18 November 2021; pp. 179–190. [Google Scholar]
  121. Yang, J.; Smirnova, A.; Yang, D.; Demartini, G.; Lu, Y.; Cudré-Mauroux, P. Scalpel-CD: Leveraging crowdsourcing and deep probabilistic modeling for debugging noisy training data. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2158–2168. [Google Scholar]
  122. Schlagwein, D.; Cecez-Kecmanovic, D.; Hanckel, B. Ethical norms and issues in crowdsourcing practices: A Habermasian analysis. Inf. Syst. J. 2018, 29, 811–837. [Google Scholar] [CrossRef] [Green Version]
  123. Gadiraju, U.; Demartini, G.; Kawase, R.; Dietze, S. Crowd anatomy beyond the good and bad: Behavioral traces for crowd worker modeling and pre-selection. Comput. Support. Cooperative Work. 2018, 28, 815–841. [Google Scholar] [CrossRef]
  124. Palmer, M.S.; Huebner, S.E.; Willi, M.; Fortson, L.; Packer, C. Citizen science, computing, and conservation: How can “crowd AI” change the way we tackle large-scale ecological challenges? Hum. Comput. 2021, 8, 54–75. [Google Scholar] [CrossRef]
  125. Mannes, A. Governance, risk, and artificial intelligence. AI Mag. 2020, 41, 61–69. [Google Scholar] [CrossRef]
  126. Choung, H.; David, P.; Ross, A. Trust and ethics in AI. AI Soc. 2022, 1–13. [Google Scholar] [CrossRef]
  127. Zheng, Q.; Tang, Y.; Liu, Y.; Liu, W.; Huang, Y. UX research on conversational human-AI interaction: A literature review of the ACM Digital Library. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022. [Google Scholar]
  128. Heath, C.; Svensson, M.S.; Hindmarsh, J.; Luff, P.; Vom Lehn, D. Configuring awareness. Comput. Support. Coop. Work. 2002, 11, 317–347. [Google Scholar] [CrossRef]
  129. Park, J.; Krishna, R.; Khadpe, P.; Fei-Fei, L.; Bernstein, M. AI-based request augmentation to increase crowdsourcing participation. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing, Stevenson, WA, USA, 28–30 October 2019; pp. 115–124. [Google Scholar]
  130. Star, S.L.; Ruhleder, K. Steps towards an ecology of infrastructure: Complex problems in design and access for large-scale collaborative systems. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chapel Hill, NC, USA, 22–26 October 1994; pp. 253–264. [Google Scholar]
  131. Mosconi, G.; Korn, M.; Reuter, C.; Tolmie, P.; Teli, M.; Pipek, V. From Facebook to the neighbourhood: Infrastructuring of hybrid community engagement. Comput. Support. Coop. Work (CSCW) 2017, 26, 959–1003. [Google Scholar] [CrossRef]
  132. Ehsan, U.; Liao, Q.V.; Muller, M.; Riedl, M.O.; Weisz, J.D. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–19. [Google Scholar]
  133. Thieme, A.; Cutrell, E.; Morrison, C.; Taylor, A.; Sellen, A. Interpretability as a dynamic of human-AI interaction. Interactions 2020, 27, 40–45. [Google Scholar] [CrossRef]
  134. Walzner, D.D.; Fuegener, A.; Gupta, A. Managing AI advice in crowd decision-making. In Proceedings of the International Conference on Information Systems, Copenhagen, Denmark, 9–14 December 2022; p. 1315. [Google Scholar]
  135. Anjum, S.; Verma, A.; Dang, B.; Gurari, D. Exploring the use of deep learning with crowdsourcing to annotate images. Hum. Comput. 2021, 8, 76–106. [Google Scholar] [CrossRef]
  136. Trouille, L.; Lintott, C.J.; Fortson, L.F. Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human-machine systems. Proc. Natl. Acad. Sci. USA 2019, 116, 1902–1909. [Google Scholar] [CrossRef] [PubMed]
  137. Zhou, Z.; Yatani, K. Gesture-aware interactive machine teaching with in-situ object annotations. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; pp. 1–14. [Google Scholar]
  138. Avdic, M.; Bødker, S.; Larsen-Ledet, I. Two cases for traces: A theoretical framing of mediated joint activity. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28. [Google Scholar] [CrossRef]
  139. Tchernavskij, P.; Bødker, S. Entangled artifacts: The meeting between a volunteer-run citizen science project and a biodiversity data platform. In Proceedings of the Nordic Human-Computer Interaction Conference, Aarhus, Denmark, 8–12 October 2022; pp. 1–13. [Google Scholar]
  140. Rzeszotarski, J.M.; Kittur, A. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 13–22. [Google Scholar]
  141. Newman, A.; McNamara, B.; Fosco, C.; Zhang, Y.B.; Sukhum, P.; Tancik, M.; Kim, N.W.; Bylinskii, Z. TurkEyes: A web-based toolbox for crowdsourcing attention data. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13. [Google Scholar]
  142. Goyal, T.; McDonnell, T.; Kutlu, M.; Elsayed, T.; Lease, M. Your behavior signals your reliability: Modeling crowd behavioral traces to ensure quality relevance annotations. In Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, Zürich, Switzerland, 5–8 July 2018; pp. 41–49. [Google Scholar]
  143. Hettiachchi, D.; Van Berkel, N.; Kostakos, V.; Goncalves, J. CrowdCog: A cognitive skill based system for heterogeneous task assignment and recommendation in crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2020, 4, 1–22. [Google Scholar] [CrossRef]
  144. Zimmerman, J.; Oh, C.; Yildirim, N.; Kass, A.; Tung, T.; Forlizzi, J. UX designers pushing AI in the enterprise: A case for adaptive UIs. Interactions 2020, 28, 72–77. [Google Scholar] [CrossRef]
  145. Hettiachchi, D.; Kostakos, V.; Goncalves, J. A survey on task assignment in crowdsourcing. ACM Comput. Surv. 2022, 55, 1–35. [Google Scholar] [CrossRef]
  146. Pei, W.; Yang, Z.; Chen, M.; Yue, C. Quality control in crowdsourcing based on fine-grained behavioral features. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28. [Google Scholar] [CrossRef]
  147. Bakici, T. Comparison of crowdsourcing platforms from social-psychological and motivational perspectives. Int. J. Inf. Manag. 2020, 54, 102121. [Google Scholar] [CrossRef]
  148. Truong, N.V.-Q.; Dinh, L.C.; Stein, S.; Tran-Thanh, L.; Jennings, N.R. Efficient and adaptive incentive selection for crowdsourcing contests. Appl. Intell. 2022, 1–31. [Google Scholar] [CrossRef]
  149. Correia, A.; Jameel, S.; Paredes, H.; Fonseca, B.; Schneider, D. Hybrid machine-crowd interaction for handling complexity: Steps toward a scaffolding design framework. In Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019; pp. 149–161. [Google Scholar]
  150. Sutherland, W.; Jarrahi, M.H.; Dunn, M.; Nelson, S.B. Work precarity and gig literacies in online freelancing. Work Employ. Soc. 2019, 34, 457–475. [Google Scholar] [CrossRef]
  151. Salminen, J.; Kamel, A.M.S.; Jung, S.-G.; Mustak, M.; Jansen, B.J. Fair compensation of crowdsourcing work: The problem of flat rates. Behav. Inf. Technol. 2022, 1–22. [Google Scholar] [CrossRef]
Figure 1. Iterative taxonomy development process flow (a) and methodological details underlying the work undertaken in this study (b). Adapted from Nickerson and co-authors [6].
Figure 2. Taxonomy of hybrid crowd-AI systems. This taxonomic proposal integrates key conceptual dimensions of the human-centered AI framework introduced in [41] to characterize the configurations in which crowd-AI interaction occurs within the interplay between human and machine intelligence.
Figure 3. Synthesis of the literature analysis based on the taxonomy proposed.
Figure 4. Example of a taxonomic scheme used to classify a crowd-AI interaction scenario [39].
Table 1. Methodological remarks extracted from primary studies reporting paid crowdsourcing experiments conducted remotely.
ID | Experimental Settings | Pre-Selection Mechanism(s) | Cost per HIT and Platform(s) | Time Allotted
P1 | 5-month-long deployment and testing with real users (n = 80 crowd workers) | - | $0.142 (Phase-1 deployment); $0.211 (Control Phase); MTurk; Hangoutsbot | ~10 min (per conversation)
P2 | Ensemble method combining multiple results on individual frame segmentations and crowd-based propagated segmentation results (n = 70 crowd workers) | - | $0.90 (Segmentation); $0.15 (Scribble); MTurk | 142.6 s (per frame segmentation); 2.5 s (per method scribbles)
P3 | 4-week testing (n = 17 participants), with an unspecified number of crowd workers | >95% assignment approval rate; Gold standard question sensor instances | ~$10/hour ($0.02 for each task performed on MTurk) | ~3 s (per labeled question sensor instance)
P5 | Classification of potential studies for a systematic literature review (n = 147 crowd workers) | >70% overall accuracy; Worker screening based on two test questions | $10/hour; MTurk | -
P6 | Purpose-mechanism annotation analogical search (n = 3 crowd workers per document), with an unspecified number of crowd workers | ≥95% acceptance rate; Training step based on a gold standard example before the task execution | $30/hour (Upwork, worker 1); $20/hour (Upwork, worker 2); $10/hour ($0.70 for each task performed on MTurk) | 1.3 min (per document annotation); 4 min (overall task completion)
P9 | Contextual bandit algorithm and agent deployment powered by AI-based request strategies for visual question answering, with an unspecified number of crowd workers | Training step using examples and a qualifying task | $12/hour; MTurk | -
P13 | Performance evaluation of a crowd-AI hybrid framework through real-world datasets (n = 3 crowd workers per image in a crowd query), with an unspecified number of crowd workers | >95% overall task approval rate; ≥1000 HITs completed | $0.20 for each worker per-image annotation; Labelme; MTurk | -
P14 | A method for AI worker evaluation that uses a “divide-and-conquer” strategy for dynamic task assignment, with an unspecified number of crowd workers | No strategies were deployed to target malicious workers | $240 for 2 h of labor; MTurk | -
P16 | Evaluation of hybrid crowd-algorithmic workflows for image annotation based on completion time and quality, with an unspecified number of crowd workers | >92% approval rate; >500 HITs completed | $9/hour ($0.20 for each task performed on MTurk) | 80 s (per HIT completion)
P17 | Evaluation of crowd responses and computational performance in identifying damages from urban infrastructure imagery data (n = 2 to 5 crowd workers per query), with an unspecified number of crowd workers | >95% overall task approval rate; ≥1000 HITs completed | $0.05 for each worker per image classification; MTurk | 0.0227 (average time taken to accomplish each streaming urban monitoring task using a hybrid crowd-AI model)
P18 | Evaluation of model performance to re-query or not crowdsourced initializations for bounding-box annotations (n = 26 crowd workers located in the United States) | A gold standard for identifying inattentive workers; Annotators with more than 15% incorrect annotations were disregarded | ~$12/hour ($0.06 for each bounding-box annotation); MTurk | -
P19 | Randomized online experiments comparing the performance of a computer vision model and a crowd of 15,016 individuals in tasks related to the detection of authentic vs. deepfake videos (n = 5524 participants: Experiment 1; n = 9492 participants: Experiment 2) | - | $7.28/hour plus bonus payments of 20% to the top participants; Experiment hosted on an external website (i.e., Detect Fakes); 304 participants recruited from Prolific | 15 min (per task completion)
P20 | Performance evaluation of a dynamic optimal neural architecture searching framework that leverages crowdsourcing for handling disaster damage assessment problems, with an unspecified number of crowd workers | >95% overall task approval rate; ≥1000 HITs completed | $0.20 for each crowd worker per-image labeling; MTurk | 0.0198 s (average time with varying crowd query frequency); 0.0201 s (average time with varying numbers of crowd workers)
P21 | Evaluation of a hybrid framework combining expert and crowd intelligence with explainable AI for misinformation detection (n = 3 crowd workers per HIT plus 5 experts), with an unspecified number of crowd workers | ≥95% task acceptance rate | Unspecified amount above the minimum requirement on MTurk ($0.01 per assignment) | 61 s (average time of task completion); 21.4 h (total waiting time to collect and aggregate contributions from crowd workers)
P25 | Development of a crowd-AI system for optimizing smart urban sensing applications (n = 3 to 7 crowd workers per task), with an unspecified number of crowd workers | >95% overall task approval rate; ≥1000 HITs completed | $0.05 for each crowd worker per image classification; MTurk | -