Abstract
For decades, work psychologists have studied the automation of work processes to establish human-centered work design. Moving from automation to autonomy through software, systems, or tools that support (or supplement) the human worker has specific consequences for field applications, for example, in the maritime sector. Autonomous systems are characterized by a high degree of self-governance concerning adaptation, communication, and decision-making. From a psychological perspective, maritime autonomy means that autonomous agents and humans work interdependently as a human-autonomy team.
In this chapter, we first introduce the concept of human-autonomy teaming (HAT) in the context of maritime work settings. Second, we elaborate on three psychological perspectives on HAT (i.e., level of autonomy, system trust, system knowledge/features) spotlighting a maritime example of HAT in ship inspection. Qualitative interview results from maritime and technological experts give insights into the complex pattern of possible opportunities and hindrances when facing agent autonomy in maritime application fields. Finally, we outline future trends in HAT increasingly needed due to continuous technical improvement. Maritime autonomy is no static goal, but an adaptive team characteristic impacted by human and situational demands with the potential for collaborative learning, challenges for leadership, and open questions regarding the role of responsibility.
1 Introduction: Human-Autonomy Teaming in Maritime Contexts
The concept of human-autonomy teaming (HAT) is used to describe humans and intelligent, autonomous agents working interdependently toward a common goal (O’Neill et al., 2022). HAT as a new form of collaboration is the focus of research under multiple heterogeneous terminologies such as human-agent teams (Chen et al., 2011), human–robot collaboration and hybrid teams (Straube & Schwartz, 2016), human–robot teaming (Endsley, 2017), or socio-digital teams (Ellwart & Kluge, 2019). In this chapter, we use the term HAT consistently.
HAT has been described as at least one human working cooperatively with at least one autonomous agent (McNeese et al., 2018). An autonomous agent is understood as a computer entity or robot with a partial or high degree of autonomy in terms of decision-making, adaptation, and communication (O’Neill et al., 2022). Compared to traditional human–human teams (HHT), HAT poses qualitatively new challenges for teamwork (Ellwart & Schauffel, 2021). Autonomous agents are capable of decision-making independent of human control. Chen et al. (2018) distinguish between autonomy at rest (e.g., intelligent software systems) and autonomy in motion (e.g., robots). Because functional HAT combines the complementary strengths of humans and machines (i.e., human intelligence and artificial intelligence, human and agent skills), HAT can achieve complex goals that are unreachable by either humans or machines alone. For example, the inspection of ship hulls needs to be time- and cost-efficient, precise, safe, and highly reliable; when humans and machines interdependently combine their expertise and strengths, these goals can be achieved simultaneously.
To work interdependently, synergistically, proactively, and purposefully toward a shared goal, human members and autonomous agents in HAT regulate their actions based on coordinative processes (e.g., communication) as well as cognitive and motivational-affective states (e.g., situation awareness, system knowledge, or system trust). Psychological models and research on HHT offer thoroughly researched taxonomies of team variables to explain and predict both dysfunctional and functional cooperation and coordination (Ellwart, 2011; Mathieu et al., 2008). These models of HHT have been transferred to human–machine interaction (e.g., Ellwart & Kluge, 2019; You & Robert, 2017), pointing out several key variables that are also highly relevant in the maritime context. The models show that functional HAT must be considered from a task-specific perspective in the maritime sector, balancing key perspectives on the human side (e.g., human team members’ knowledge, skills, and personality), the technical side (e.g., features of the autonomous multi-robot system), and the organizational side (e.g., legal regulation or maritime culture; see Fig. 1).
In the maritime context, the inspection and maintenance of large vessels such as bulk carriers is an important pillar of maritime services. Thousands of medium-sized to large ships cross the world’s seas. To date, ship inspection and maintenance is a manual field of work, but the introduction of autonomous systems (i.e., autonomous robotic systems, intelligent software agents, etc.) offers benefits for human safety (e.g., reduced work accidents), the economy (e.g., time- and cost-efficient services), and the environment (e.g., reduced fuel consumption). The World Maritime University (2019) highlights the automation potential of inspection drones, repair robots, or condition-based maintenance systems, and emphasizes that advanced user interfaces will provide a whole new user experience.
In multiple interdisciplinary research projects (EU projects focusing on maritime autonomy concerning ships and ports, e.g., BUGWRIGHT2, 2020; RAPID, 2020; ROBINS, 2020), researchers and practitioners collaborate to unleash the full potential of such novel technologies. Among these, the EU project BUGWRIGHT2 (Autonomous Robotic Inspection and Maintenance on Ship Hulls) aims to develop an autonomous multi-robot system for the inspection and maintenance of ship hulls, combining diverse autonomous technologies including aerial drones, magnetic-wheeled crawlers, and underwater drones as well as virtual reality and augmented reality in the user interfaces (see Fig. 1).
The present chapter has two central aims. First, we underline the benefits of combining psychological models, system engineering, and end-user perspectives to develop and introduce functional HAT in the maritime sector. Therefore, Sect. 2 elaborates on three psychological perspectives that are crucial for evaluating functional HAT and that mirror the perspectives of system developers and end-users concerning these concepts in the specific application task of BUGWRIGHT2 (maritime voices). Second, we aim to reflect on future developments of HAT in maritime services. Therefore, Sect. 3 elaborates on the adaptability of HAT configurations and poses questions for designing the next generation of autonomous maritime technology.
2 Psychological Perspectives on Human-Autonomy Teaming in Ship Inspections
The implementation of multi-robot systems in ship inspection and maintenance will transform HHT into HAT. Research in work psychology and human factors outlines numerous interdependent factors that are relevant for functional cooperation in HAT (Ellwart & Schauffel, 2021; O’Neill et al., 2022; You & Robert, 2017). The chapter can only address a narrow selection of critical factors (see also Schauffel et al., 2022). We focus on three psychological perspectives that have received profound scientific attention (e.g., meta-analyses, reviews) and were reflected as critical for HAT in the specific context of ship inspection and maintenance across stakeholders: (1) the level of autonomy (LOA), (2) system trust, and (3) system knowledge and features.
The critical factors are reflected in the light of the specific maritime application within the project BUGWRIGHT2. The often abstract theoretical concepts of work psychology research take on special significance when set against voices from concrete application and feasibility contexts in the maritime sector. This not only highlights the ecological validity of the theoretical concepts but also the need to involve end-users and developers in close exchange during system development. Therefore, an extensive interview series was conducted to reflect on the potential needs, opportunities, and challenges of HAT in ship inspection and their consequences for end-user acceptance. Relevant maritime stakeholders (Johansson et al., 2021; Pastra et al., 2022) participated in the interview series (e.g., shipyards, service suppliers, shipowners, and ship inspectors). In line with theoretical models of technology acceptance (Venkatesh et al., 2016) that have been successfully applied to the context of HAT (e.g., Bröhl et al., 2019) and models of human-centered system design (Karltun et al., 2017), the results from 23 expert interviews point to multiple critical factors of HAT that holistically touch the human element, the technological systems involved, and the organizational context of maritime inspection and maintenance (see Fig. 1, thin arrows). Participation in the interview study was voluntary, and withdrawal was possible at any time without consequences. The interviews were conducted via video call and lasted about one hour each. Interview statements were documented and clustered qualitatively. The present chapter documents excerpts from the interview results; for further details on the interview methods and results, see the official project homepage (BUGWRIGHT2, 2020).
Table 1 summarizes the psychological perspectives, supportive interview statements from maritime stakeholders (maritime voices), and empirical evidence from the field of work psychology and human factors that we focused on in this chapter. In detail, each perspective is discussed in the following paragraphs. Quotes included in the paragraphs refer to interviewees’ comments, which are compiled in Table 1 for an overview.
2.1 Level of Autonomy
Conceptualization. The level of autonomy (LOA) refers to the degree of system autonomy in HAT, ranging from no autonomy (i.e., manual human control) through semi-autonomy (i.e., partial system independence in task realization; the human can veto) to full autonomy (i.e., high system independence; the human is at most informed). To avoid overly abstract assessments, the LOA is differentiated by four specific task types (Parasuraman, 2000; Parasuraman et al., 2000): information acquisition, information analysis, decision selection, and action implementation. Each task type can be realized at each LOA. In addition to a task-specific evaluation of LOA, Schiaretti et al. (2017) highlight that, concerning maritime autonomy, each technological subsystem must be evaluated separately regarding its LOA. For example, in a multi-robot system for ship inspection and maintenance, magnetic-wheeled crawlers, aerial drones, and underwater drones have different LOAs that might vary depending on the specific subtask (e.g., monitoring the steel plate thickness, generating options for the mission paths, or executing additional thickness measurements or visual inspections of critical hull areas based on former inspection reports).
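To make this task- and technology-specific view of LOA concrete, the following minimal Python sketch models the four task types of Parasuraman et al. (2000) together with a simplified three-step LOA scale, assigned separately per subsystem as Schiaretti et al. (2017) suggest. The subsystem names and level assignments are purely illustrative assumptions, not BUGWRIGHT2 specifications.

```python
from enum import Enum, IntEnum

class TaskType(Enum):
    # The four task types of Parasuraman et al. (2000)
    INFORMATION_ACQUISITION = "information acquisition"
    INFORMATION_ANALYSIS = "information analysis"
    DECISION_SELECTION = "decision selection"
    ACTION_IMPLEMENTATION = "action implementation"

class LOA(IntEnum):
    # Simplified three-step scale; finer-grained scales exist in the literature
    MANUAL = 0  # no autonomy: manual human control
    SEMI = 1    # semi-autonomy: system acts, human can veto
    FULL = 2    # full autonomy: human is at most informed

# Hypothetical LOA profile per subsystem and task type (illustrative values only)
loa_profile: dict[str, dict[TaskType, LOA]] = {
    "magnetic_crawler": {
        TaskType.INFORMATION_ACQUISITION: LOA.FULL,  # e.g., thickness measurements
        TaskType.INFORMATION_ANALYSIS: LOA.SEMI,
        TaskType.DECISION_SELECTION: LOA.MANUAL,     # the inspector decides
        TaskType.ACTION_IMPLEMENTATION: LOA.SEMI,
    },
    "aerial_drone": {
        TaskType.INFORMATION_ACQUISITION: LOA.SEMI,
        TaskType.INFORMATION_ANALYSIS: LOA.SEMI,
        TaskType.DECISION_SELECTION: LOA.MANUAL,
        TaskType.ACTION_IMPLEMENTATION: LOA.MANUAL,
    },
}

def requires_human(subsystem: str, task: TaskType) -> bool:
    """True if the human must act (manual) or may veto (semi) for this task."""
    return loa_profile[subsystem][task] < LOA.FULL
```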
Different LOAs have unique consequences for human operators and HAT performance. Multiple strategies exist to allocate a task to a human or an autonomous agent in HAT (Rauterberg et al., 1993). Their functionality or dysfunctionality for HAT can be evaluated based on well-established criteria of functional teamwork and human-centered work design (e.g., DIN EN ISO 9241-2; Klonek & Parker, 2021; Wäfler et al., 2003). Thereby, one needs to consider that a high LOA does not necessarily result in human benefits (e.g., reduced cognitive load, monotony, or stress) but may also correspond to dysfunctional outcomes, highlighting the two-sidedness of high LOA concerning situational awareness and human control. The dilemma is that a high LOA combined with high system reliability and robustness decreases situational awareness and limits the human operator’s ability to resume control in critical situations (i.e., the automation conundrum; Endsley, 2017).
Maritime Voices and Concluding Proposition. Reflecting on the LOA (see Table 1), in line with theory and research, maritime experts highlighted the task-specificity of LOA (“We need different LOAs for different tasks”) when anticipating HAT in ship inspections. The process of ship inspection is a highly complex multi-phase process including preparation, operation, and reporting phases (Pastra et al., 2022). Referring to the task types by Parasuraman et al. (2000), maritime experts formulated a clear need for human decision-making, for example, when deciding on the to-be-inspected areas of the ship and the final evaluation of the results (i.e., seaworthiness certificate), challenging the allocation of responsibilities and decision rights within HAT. The technology-specific focus on LOA was also mentioned by maritime experts, including clear anticipations of a rather high LOA for the magnetic-wheeled crawlers and lower levels for the aerial drones. In addition, it becomes clear that LOA is not static but a dynamic element of HAT, as constant technological development and team habituation might lead to flexible adaptation of a specific LOA. For example, “the LOA underwater is not clear at the moment,” considering current technological challenges regarding video streaming and localization underwater. Furthermore, maritime experts state that “humans need a feeling of control over the swarm teams,” thereby referring to humans’ basic needs. Humans have an inherent and fundamental need for control and autonomy (Deci & Ryan, 1985, 2000). However, the concept of LOA adopts a strong focus on system autonomy: the higher the system’s LOA, the lower the control and autonomy of the human interacting with the technical systems. A large body of research in work psychology has elaborated on the crucial role of human autonomy (i.e., control) in performance, individual well-being, and motivation (Deci & Ryan, 1985; Hackman & Oldham, 1976; Olafsen et al., 2018). A central goal of HAT design must be to balance technical LOA and human control; humans’ basic need for autonomy must not conflict with system autonomy. In addition, stakeholder statements indicate that the LOA serves functional HAT if it is high enough to enable parallel work and the optimization of existing work processes (“We need LOAs that allow people to do separate tasks at the same time as robots are inspecting the ship”).
Taken together, empirical evidence and stakeholder comments illustrate that LOA can serve functional HAT in ship inspection when agent autonomy and human control are constantly balanced on a task- and technology-specific level. There is no simple all-or-nothing principle; rather, LOA must be balanced and adaptable, and it must be evaluated and designed against the background of the task at hand.
2.2 System Trust
Conceptualization. System trust describes the willingness to depend on technology due to its characteristics (McKnight et al., 2011). In the context of maritime HAT, the object of this dependence is multifaceted, including heterogeneous robotic technologies (e.g., magnetic-wheeled crawlers, underwater drones). System trust depends on multiple factors that are rooted in the technology, the human, the task, and the organizational context (see Hancock et al., 2011). For maritime applications, following Pastra et al. (2022), technical robustness and safety, data governance and regulation, and policies are the most vital elements of system trust. However, the authors emphasize that system trust might differ depending on the human element (e.g., skills), the specific vessel (e.g., age or type), and situational environmental conditions (e.g., in-water visibility). Thus, system trust is not static but dynamic and develops over time. First- and second-hand experiences impact trust dynamics, and dispositional aspects (i.e., the ability to trust) also shape system trust in HAT, especially within the early stages of technology adoption (Hoff & Bashir, 2015). Subjective competence comparisons between a human and an autonomous agent impact system trust (Ellwart et al., 2022), given that humans have a basic drive to compare themselves with others in a group or a team (Festinger, 1954). Regarding the optimal level of system trust, not the highest but a well-calibrated level of system trust is required, as both mistrust and overtrust are associated with performance reduction (Parasuraman & Manzey, 2010).
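As a toy illustration of trust calibration, the sketch below compares a human’s subjective trust with the system’s observed reliability and flags overtrust and mistrust (cf. Parasuraman & Manzey, 2010). The update rule and the 0.1 tolerance are illustrative assumptions, not an empirically validated model.

```python
def observed_reliability(outcomes: list[bool]) -> float:
    """Fraction of successful autonomous task episodes (1.0 = perfectly reliable)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def calibration_gap(subjective_trust: float, outcomes: list[bool]) -> str:
    """Compare subjective trust (0..1) with observed reliability.

    A positive gap beyond the tolerance indicates overtrust, a negative gap
    mistrust; both are associated with performance losses (Parasuraman &
    Manzey, 2010). The tolerance of 0.1 is an arbitrary illustrative choice.
    """
    gap = subjective_trust - observed_reliability(outcomes)
    if gap > 0.1:
        return "overtrust: trust exceeds demonstrated reliability"
    if gap < -0.1:
        return "mistrust: trust falls short of demonstrated reliability"
    return "calibrated"

# Example: trusting an agent at 0.9 that succeeded in only 6 of 10 missions
print(calibration_gap(0.9, [True] * 6 + [False] * 4))  # -> overtrust
```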
Maritime Voices and Concluding Proposition. Reflecting on system trust (see Table 1), maritime experts highlight that the maritime context might pose a special challenge for HAT, stating that ship inspection and maintenance “is a traditional field of work, with high rigidity, low technology trust, and high skepticism.” Maritime HAT thus requires a paradigm shift and cultural change. High end-user participation might foster such cultural change and establish system trust in maritime autonomy, but the timing of end-user participation is crucial. Early robot failures, in particular, lower trust (Desai et al., 2013); therefore, “there is the risk of testing too early or too late.” Referring to the aspect of trust calibration (i.e., neither too high nor too low system trust), end-users “need realistic expectations about robot features,” including “a clear understanding of what the system can do and cannot do, with precise examples in terms of autonomous navigation and positioning.” Such mental models of HAT help humans to calibrate trust appropriately in routine and especially non-routine tasks. Of note, considering system trust alone falls short when discussing HAT in ship inspection, as multiple human stakeholders will remain active in the inspection process. Thus, interpersonal trust will remain focal alongside system trust. In addition, a high LOA of single technologies calls for a discussion of inter-robot trust, which further complicates the topic of trust in maritime HAT.
Taken together, well-calibrated system trust that considers human uniqueness as well as autonomy’s strengths and limitations serves functional HAT, whereas both overtrust and mistrust reduce HAT functionality. Thereby, system trust is subjective and dynamic, developing over time with different trust levels for routine and non-routine situations.
2.3 System Knowledge and Features
Conceptualization. System knowledge is a key aspect of functional HAT and describes “the human’s understanding of the general system logic, its processes, capabilities, and limitations” (Rieth & Hagemann, 2021a, p. 5). In the context of maritime autonomy, two domains of system knowledge should be distinguished. First, short-term system knowledge refers to transparent communication and situation awareness in HAT. Here, interface design can help to maintain a constantly high level of situational awareness and foster agent transparency (see Schauffel et al., 2022). A large body of research in human factors and work psychology highlights the importance of agent transparency and situational awareness as a crucial knowledge domain for system trust, adaptation, and coordination (Chen et al., 2018). Second, a long-term perspective on system knowledge refers to knowledge about system features, (team) goals, roles, and tasks. Unlike situational awareness, long-term knowledge integrates the operators’ understanding of tasks, roles, goals, and work processes from administrative guidelines with learned experiences from operations. Here, for example, high reliability during operation is vital, referring to the accurate functioning of autonomy over time and the reproducibility of the tests performed (Pastra et al., 2022). Moreover, accurate mental models of HAT tasks, roles, and responsibilities help to establish well-calibrated system trust and guarantee appropriate human competences (e.g., through training or certification), as human competence demands will increase in HAT (Rieth & Hagemann, 2021b). Crucial for the development of both situational knowledge and long-term mental models is communication between the system and the human operator. Communication helps the human to understand the current decisions of the system and to integrate this experience into long-term mental models.
Maritime Voices and Concluding Proposition. Reflections from maritime experts confirm that high reliability (“Robots need to be sensitive to the ship structure with the reliability of 100 out of 100”), in combination with precise examples of robot strengths and limitations, is strongly needed for functional HAT. It becomes evident that end-user participation reveals concrete technological elements that need to be considered in robot design (e.g., safe mode, proximity sensor; see Table 1). Maritime experts note that aspects of communication between human and autonomous entities in HAT are so far open questions. Communication needs to be two-sided, meaning that humans can intervene in robot missions (“The human end-user has to be able to interfere if he decides to do so, based on his long-year experiences or intuition”) and autonomous technologies can contact humans actively in critical situations (“The robot should be able to give a warning sign to the human user”). System knowledge also refers to new roles and tasks that go along with the implementation of a multi-robot system (e.g., drone driving, robot calibration; see Table 1).
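The two-sided communication the experts call for can be pictured as a minimal message channel: the robot can issue unsolicited warnings, and the human can veto a running mission at any time. The message types and method names below are hypothetical illustrations, not the BUGWRIGHT2 interface.

```python
import queue
from dataclasses import dataclass

@dataclass
class RobotMessage:
    severity: str  # e.g., "info", "warning", "critical"
    text: str

class MissionChannel:
    """Minimal two-way channel: robot -> human warnings, human -> robot veto."""

    def __init__(self) -> None:
        self.to_human: queue.Queue[RobotMessage] = queue.Queue()
        self.abort_requested = False

    def warn(self, text: str, severity: str = "warning") -> None:
        # The robot proactively contacts the human, e.g., in a critical situation
        self.to_human.put(RobotMessage(severity, text))

    def human_abort(self) -> None:
        # The human intervenes based on experience or intuition
        self.abort_requested = True

channel = MissionChannel()
channel.warn("Unexpected hull geometry near frame 42; please review video feed")
channel.human_abort()  # the inspector decides to take over manually
```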
Taken together, functional HAT requires accurate knowledge about ongoing team processes and about robot features, as well as an understanding of the resulting consequences for human competences, roles, and responsibilities.
3 Envisioning the Next Generation of Maritime Human-Autonomy Teaming
Looking at current developments of maritime robotic systems, as described above in the BUGWRIGHT2 example, it is noticeable that although the technical solutions include a certain degree of autonomy, the systems cannot yet be assumed to be fully self-governed when operating in complex tasks. Visions of highly autonomous systems, in which autonomous robots take over complex activities and work interdependently with humans, are being researched and developed. The factors described above (i.e., system trust, LOA, and system knowledge) remain relevant for functional HAT in the next generation of maritime autonomy that includes fully autonomous systems, but they are supplemented by a factor that is critical for self-governed systems: team adaptability. Adaptability means that systems can detect changes in the environment and select alternative courses of action that fit new situations. Adaptability in complex environments such as maritime inspections must be described and designed on different levels: (1) reactive adaptability, (2) reflective adaptability, and (3) long-term applicability and strategic adaptability.
Reactive Adaptability. A reactive level of adaptability means that a system comprising humans and robots recognizes changing requirements and situations during task operation and can adjust its behavior. In work psychology, Rico et al. (2019) speak of adaptation through implicit coordination during task action, when team members anticipate the information or behavior needed in a given situation and react “automatically.” The prerequisite for this is that the autonomous technical system and the human operator both have valid situational awareness to detect changes and possess appropriate knowledge of how to react in the given situation. As a result, no explicit command is necessary, because the team of humans and autonomous agents “knows” about alternative action plans in certain situations or anticipates human needs. For example, in a maritime context, robots should recognize and avoid obstacles or be programmed to communicate new, undefinable sensory inputs to the operator without being asked. From a research perspective, there are only a few empirical papers on this type of adaptability, mostly in the context of aviation and pilot teams with human and software agents. For example, Johnson et al. (2021) showed that coordination training between software agents and human pilots led to better adaptation in critical situations through higher communication anticipation. Brand and Schulte (2021) developed a workload-adaptive and task-specific cognitive agent for helicopter crews that adjusted its support by identifying task situations and the crew’s workload. Liu et al. (2016) showed in a human–robot interaction study that participants were highly sensitive to the anticipative adaptation of a robot: robots that adapted to human actions over time were preferred as work partners over non-adaptive ones.
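A single reactive adaptation step can be sketched as a sense-classify-act rule: known situations trigger an automatic local re-plan, while unclassifiable inputs are proactively reported to the operator, so no explicit command is needed. Sensor semantics, thresholds, and the fallback behavior are illustrative assumptions.

```python
from typing import Callable

def reactive_step(sensor_reading: str, known_situations: set[str],
                  notify_operator: Callable[[str], None]) -> str:
    """One reactive adaptation step (implicit coordination, Rico et al., 2019)."""
    if sensor_reading in known_situations:
        # Anticipated situation: adjust behavior without an explicit command
        return "replan: detour around obstacle"
    # Unknown input: proactively inform the operator without being asked
    notify_operator(f"Unclassified sensory input: {sensor_reading}")
    return "hold position and await operator input"

# Known situation -> automatic re-plan; unknown input -> unsolicited report
print(reactive_step("weld seam", {"weld seam", "anode"}, print))
print(reactive_step("irregular echo", {"weld seam", "anode"}, print))
```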
Reflective Adaptability. A reflective level of adaptability means that humans and robots can reflect on task performance after an action period, evaluate performance feedback, and (re-)plan behavior for subsequent action phases. In work psychology, Rico et al. (2019) speak of adaptation through explicit coordination during a transition phase (i.e., between two action phases). Successful adaptation during transitions relies on a valid and shared situation awareness that feeds back functional and dysfunctional performance from the action phase. Moreover, successful adaptation in transition relies on explicit communication to reflect on prior achievements and plan future tactics (Ellwart et al., 2015). This level of adaptation places high interaction-related demands on HAT: on the one side, sensors and user interfaces have to support human-autonomy reflection, and on the other side, the system’s software must be able to handle such tactical adjustments. In the maritime context, for example, humans would evaluate the robots’ inspection performance and feed back missing information or mistrust of the robot, which leads to adjustments in subsequent inspection phases. Probably because of the technical challenges, there is little research on reflective adaptation in HAT. Kox et al. (2021) investigated trust repair strategies between robots and humans during transition phases: when the robot failed at its task, the system fed back expressions of regret and explanations, which resulted in greater trust repair.
One type of reactive or reflective adaptation is the concept of adaptive LOA. This means that formerly autonomous actions of the robot become manually controlled (or vice versa) depending on task or team characteristics. HAT may adapt the LOA of the robot or software agents depending on system errors (Chavaillaz et al., 2016) or the workload of the human (Calhoun et al., 2011). Adaptive LOA may be implemented automatically during action (i.e., reactively) or after task reflection on demand by the human team member. In this vein, the concept of socio-digital self-comparisons may be relevant for future research. When humans compare their task-related competences with those of robots, Ellwart et al. (2022) found that perceived advantages of robot competences (compared to one’s own competences) were related to task allocation toward the robot. Thus, adaptive LOA may also impact the evaluation of one’s own and the robot’s competences in a given situation.
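A minimal sketch of an adaptive-LOA policy, assuming a three-step scale and arbitrary illustrative thresholds: autonomy is lowered when system errors accumulate (cf. Chavaillaz et al., 2016) and raised when the operator’s workload is high (cf. Calhoun et al., 2011).

```python
def adapt_loa(current_loa: int, recent_error_rate: float,
              operator_workload: float) -> int:
    """Return an adjusted LOA on a 0 (manual) .. 2 (full) scale.

    Illustrative policy: degrade autonomy on unreliable behavior, delegate
    more when the operator is overloaded; otherwise keep the current level.
    The 0.2 and 0.8 thresholds are arbitrary assumptions for the sketch.
    """
    if recent_error_rate > 0.2:      # system errors -> hand control back
        return max(current_loa - 1, 0)
    if operator_workload > 0.8:      # high workload -> delegate more
        return min(current_loa + 1, 2)
    return current_loa

assert adapt_loa(2, recent_error_rate=0.3, operator_workload=0.5) == 1
assert adapt_loa(1, recent_error_rate=0.0, operator_workload=0.9) == 2
```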
Long-term Applicability and Strategic Adaptability. While reactive and reflective adaptation focus on short-term adjustments of HAT during a given sequence of action and transition phases, there is also a long-term perspective on the applicability and adaptability of HAT. Field interviews in the maritime sector of ship inspections within the BUGWRIGHT2 project pointed toward long-term issues that are closely related to user acceptance and knowledge needs before implementation. For example, inspectors of ship hulls asked whether the autonomous system can be used sustainably over a long time without any loss in quality and performance. This relates to technical reliability after years of application but also to the question of whether the system will fit the demands of the future. Thus, systems need to adapt strategically to changing conditions, such as new ship types, new inspection or software regulations, and new workflows. To implement these adaptations successfully, close cooperation between members of HAT and system developers is required, not only in the phase of technology introduction but in the long term over the life cycle of the HAT.
4 Conclusion
From a psychological perspective, the collaboration between humans and self-governing systems can be described as a complex interaction of numerous factors at the level of the human, the technology, and the organization. The robot is no longer just a tool but an autonomous team member in HAT. The resulting requirements for the design of maritime HAT can be developed in an interdisciplinary collaboration between work psychologists, system developers, and end-users in a participatory manner. Yet, there is no universally optimal design solution. In this context, well-researched interaction processes as well as cognitive and emotional states from psychological models can provide a frame of reference to design functional and adaptive systems. Thereby, the specific task must be at the center of system design. It makes a difference whether robots gather data for ship hull inspections autonomously and hand this information to a human inspector for a decision, or whether robots gather data and decide about the seaworthiness of the ship and the hull’s safety autonomously. The optimal design solution is always bound to the specific task and thus opens up a wide range of application perspectives for HAT in the maritime sector.
Bibliography
Brand, Y., & Schulte, A. (2021). Workload-adaptive and task-specific support for cockpit crews: Design and evaluation of an adaptive associate system. Human-Intelligent Systems Integration, 3(2), 187–199. https://doi.org/10.1007/s42454-020-00018-8
Bröhl, C., Nelles, J., Brandl, C., Mertens, A., & Nitsch, V. (2019). Human–robot collaboration acceptance model: Development and comparison for Germany, Japan, China and the USA. International Journal of Social Robotics, 11(5), 709–726. https://doi.org/10.1007/s12369-019-00593-0
BUGWRIGHT2. (2020). https://www.bugwright2.eu/ (Accessed 16 May 2022).
Calhoun, G. L., Ward, V. B. R., & Ruff, H. A. (2011). Performance-based adaptive automation for supervisory control. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 2059–2063. https://doi.org/10.1177/1071181311551429
Chavaillaz, A., Wastell, D., & Sauer, J. (2016). System reliability, performance and trust in adaptable automation. Applied Ergonomics, 52, 333–342. https://doi.org/10.1016/j.apergo.2015.07.012
Chen, J. Y. C., Barnes, M. J., & Harper-Sciarini, M. (2011). Supervisory control of multiple robots: Human-performance issues and user-interface design. IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, 41(4), 435–454. https://doi.org/10.1109/TSMCC.2010.2056682
Chen, J. Y. C., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259–282. https://doi.org/10.1080/1463922X.2017.1315750
Deci, E. L., & Ryan, R. M. (1985). Cognitive evaluation theory. In E. L. Deci & R. M. Ryan (Eds), Intrinsic motivation and self-determination in human behavior (pp. 43–85). Springer Science+Business Media. https://doi.org/10.1007/978-1-4899-2271-7_3
Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. https://doi.org/10.1207/S15327965PLI1104_01
Desai, M., Kaniarasu, P., Medvedev, M., Steinfeld, A., & Yanco, H. (2013). Impact of robot failures and feedback on real-time trust. 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 251–258). https://doi.org/10.1109/HRI.2013.6483596
Ellwart, T. (2011). Assessing coordination in human groups: Concepts and methods. In M. Boos, M. Kolbe, P. M. Kappeler, & T. Ellwart (Eds.), Coordination in human and primate groups (pp. 119–135). Springer.
Ellwart, T., Happ, C., Gurtner, A., & Rack, O. (2015). Managing information overload in virtual teams: Effects of a structured online team adaptation on cognition and performance. European Journal of Work and Organizational Psychology, 24(5), 812–826. https://doi.org/10.1080/1359432X.2014.1000873
Ellwart, T., & Kluge, A. (2019). Psychological perspectives on intentional forgetting: An overview of concepts and literature. Künstliche Intelligenz [German Journal on Artificial Intelligence], 33(1), 79–84. https://doi.org/10.1007/s13218-018-00571-0
Ellwart, T., & Schauffel, N. (2021). Humans, software agents, and robots in hybrid teams. Effects on work, safety, and health. PsychArchives. https://doi.org/10.23668/psycharchives.5310
Ellwart, T., Schauffel, N., Antoni, C. H., & Timm, I. J. (2022). I vs. robot: Sociodigital self-comparisons in hybrid teams from a theoretical, empirical, and practical perspective. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO), 54, 273–284. https://doi.org/10.1007/s11612-022-00638-5
Endsley, M. R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140. https://doi.org/10.1177/001872675400700202
Hackman, J. R., & Oldham, G. R. (1976). Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance, 16, 250–279.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
Johansson, T. M., Dalaklis, D., & Pastra, A. (2021). Maritime robotics and autonomous systems operations: Exploring pathways for overcoming international techno-regulatory data barriers. Journal of Marine Science and Engineering, 9(6), Article 594. https://doi.org/10.3390/jmse9060594
Johnson, C. J., Demir, M., McNeese, N. J., Gorman, J. C., Wolff, A. T., & Cooke, N. J. (2021). The impact of training on human-autonomy team communications and trust calibration. Human Factors. Advance online publication. https://doi.org/10.1177/00187208211047323
Kaber, D. B., & Endsley, M. R. (1997). Level of automation and adaptive automation effects on performance in a dynamic control task. In Proceedings of the 13th Triennial Congress of the International Ergonomics Association. Symposium conducted at the meeting of Finnish Institute of Occupational Health, Helsinki.
Kaber, D. B., & Endsley, M. R. (2004). The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science, 5(2), 113–153. https://doi.org/10.1080/1463922021000054335
Karltun, A., Karltun, J., Berglund, M., & Eklund, J. (2017). HTO – A complementary ergonomics approach. Applied Ergonomics, 59(Pt A), 182–190. https://doi.org/10.1016/j.apergo.2016.08.024
Klonek, F., & Parker, S. K. (2021). Designing SMART teamwork: How work design can boost performance in virtual teams. Organizational Dynamics, 50(1), Article 100841. https://doi.org/10.1016/j.orgdyn.2021.100841
Kox, E. S., Kerstholt, J. H., Hueting, T. F., & de Vries, P. W. (2021). Trust repair in human-agent teams: The effectiveness of explanations and expressing regret. Autonomous Agents and Multi-Agent Systems, 35(2). https://doi.org/10.1007/s10458-021-09515-9
Liu, C., Hamrick, J. B., Fisac, J. F., Dragan, A. D., Hedrick, J. K., Sastry, S. S., & Griffiths, T. L. (2016). Goal inference improves objective and perceived performance in human-robot collaboration. arXiv preprint. https://doi.org/10.48550/arXiv.1802.01780
Mathieu, J., Maynard, M. T., Rapp, T., & Gilson, L. (2008). Team effectiveness 1997–2007: A review of recent advancements and a glimpse into the future. Journal of Management, 34(3), 410–476. https://doi.org/10.1177/0149206308316061
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1–25. https://doi.org/10.1145/1985347.1985353
McNeese, N. J., Demir, M., Cooke, N. J., & Myers, C. (2018). Teaming with a synthetic teammate: Insights into human-autonomy teaming. Human Factors, 60(2), 262–273. https://doi.org/10.1177/0018720817743223
O’Neill, T., McNeese, N., Barron, A., & Schelble, B. (2022). Human–autonomy teaming: A review and analysis of the empirical literature. Human Factors, 64(5), 904–938. https://doi.org/10.1177/0018720820960865
Olafsen, A. H., Deci, E. L., & Halvari, H. (2018). Basic psychological needs and work motivation: A longitudinal test of directionality. Motivation and Emotion, 42(2), 178–189. https://doi.org/10.1007/s11031-017-9646-2
Onnasch, L., Wickens, C. D., Li, H., & Manzey, D. H. (2014). Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3), 476–488. https://doi.org/10.1177/0018720813501549
Parasuraman, R. (2000). Designing automation for human use: Empirical studies and quantitative models. Ergonomics, 43(7), 931–951. https://doi.org/10.1080/001401300409125
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part a: Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354
Pastra, A., Schauffel, N., Ellwart, T., & Johansson, T. M. (2022). Building a trust ecosystem for remote inspection technologies in ship hull inspections. Law, Innovation and Technology, 14(2), 474–497. https://doi.org/10.1080/17579961.2022.2113666
Rauterberg, M., Strohm, O., & Ulich, E. (1993). Arbeitsorientiertes Vorgehen zur Gestaltung menschengerechter Software [Work-oriented approach to designing human-centered software]. Ergonomie & Information, 20, 7–21.
Rico, R., Gibson, C. B., Sánchez-Manzanares, M., & Clark, M. A. (2019). Building team effectiveness through adaptation: Team knowledge and implicit and explicit coordination. Organizational Psychology Review, 9(2–3), 71–98. https://doi.org/10.1177/2041386619869972
Rieth, M., & Hagemann, V. (2021a). Automation as an equal team player for humans? - A view into the field and implications for research and practice. Applied Ergonomics, 98, Article 103552. https://doi.org/10.1016/j.apergo.2021.103552
Rieth, M., & Hagemann, V. (2021b). Veränderte Kompetenzanforderungen an Mitarbeitende infolge zunehmender Automatisierung – Eine Arbeitsfeldbetrachtung [Changing competence requirements for employees as a result of increasing automation - A work field view]. Gruppe. Interaktion. Organisation. Zeitschrift Für Angewandte Organisationspsychologie (GIO), 52(1), 37–49. https://doi.org/10.1007/s11612-021-00561-1
RAPID (Risk-aware Autonomous Port Inspection Drones). (2020). https://rapid2020.eu/ (Accessed 15 May 2022).
ROBINS (Robotics technology for Inspection of Ships). (2020). https://www.robins-project.eu/ (Accessed 18 June 2022).
Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400. https://doi.org/10.1177/0018720816634228
Schauffel, N., Gründling, J., Ewerz, B., Weyers, B., & Ellwart, T. (2022). Human-Robot Teams. Spotlight on Psychological Acceptance Factors exemplified within the BUGWRIGHT2 Project. PsychArchives. https://doi.org/10.23668/psycharchives.5584
Schiaretti, M., Chen, L., & Negenborn, R. R. (2017). Survey on autonomous surface vessels: Part I – A new detailed definition of autonomy levels. In T. Bektaş, S. Coniglio, A. Martinez-Sykora, & S. Voß (Eds.), Computational logistics (Lecture Notes in Computer Science, Vol. 10572, pp. 219–233). Springer International Publishing. https://doi.org/10.1007/978-3-319-68496-3_15
Sheridan, T. B. (2016). Human-robot interaction: Status and challenges. Human Factors, 58(4), 525–532. https://doi.org/10.1177/0018720816644364
Straube, S., & Schwartz, T. (2016). Hybride Teams in der digitalen Vernetzung der Zukunft: Mensch-Roboter-Kollaboration [Hybrid teams in the digital networking of the future: human-robot collaboration]. Industrie 4.0 Management, 32, 41–45.
Venkatesh, V., Thong, J., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the Association for Information Systems, 17(5), 328–376. https://doi.org/10.17705/1jais.00428
Wäfler, T., Grote, G., Windischer, A., & Ryser, C. (2003). KOMPASS: A method for complementary system design. In E. Hollnagel (Ed.), Handbook of cognitive task design (pp. 477–502). Lawrence Erlbaum Associated Publishers. https://doi.org/10.1201/9781410607775.ch20
World Maritime University. (2019). Transport 2040 Automation Technology Employment: The future of work. https://doi.org/10.21677/itf.20190104
You, S., & Robert, L. P. (2017). Teaming up with robots: An IMOI (inputs-mediators-outputs-inputs) framework of human-robot teamwork. International Journal of Robotic Engineering, 2(1), 1–7. https://doi.org/10.35840/2631-5106/4103
Zhou, J., Zhu, H., Kim, M., & Cummings, M. L. (2019). The impact of different levels of autonomy and training on operators’ drone control strategies. ACM Transactions on Human-Robot Interaction, 8(4), 1–15. https://doi.org/10.1145/3344276
Funding
Research funded by the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 871260.