Abstract
Improving agent capabilities and the increasing availability of computing platforms and internet connectivity allow more effective and diverse collaboration between human users and automated agents. To increase the viability and effectiveness of human–agent collaborative teams, there is a pressing need for research that enables such teams to maximally leverage the relative strengths of human and automated reasoners. We study virtual, ad hoc teams, each comprising a human and an agent, that collaborate over a few episodes, where each episode requires them to complete a set of tasks chosen from given task types. Team members are initially unaware of their partner's capabilities, and the agent, acting as the task allocator, must adapt the allocation process to maximize team performance. The focus of the current paper is on analyzing how explanations of allocation decisions affect both user performance and the human worker's outlook, including factors such as motivation and satisfaction. We investigate the effect of explanations provided by the agent allocator on team performance and on key factors reported by the human teammate in surveys: the motivating effect, explanatory power, and understandability of the explanations, as well as satisfaction with and trust/confidence in the teammate. We evaluate a set of hypotheses about these factors under positive-explanation, negative-explanation, and no-explanation scenarios through experiments conducted with MTurk workers.
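To make the allocation setting concrete, the sketch below illustrates one way an agent allocator could adapt to an initially unknown teammate: it tracks per-task-type success rates across episodes, greedily assigns each task to whichever teammate appears stronger, and attaches a positively framed explanation to each decision. This is a minimal hypothetical illustration, not the algorithm evaluated in the paper; all identifiers and task types are assumptions made for exposition.

```python
# Hypothetical sketch of an adaptive task allocator (illustrative only; not
# the paper's algorithm). The allocator maintains success/attempt counts per
# (teammate, task type), estimates success rates, greedily assigns each task,
# and produces a positively framed explanation for the decision.
import random
from collections import defaultdict

class TaskAllocator:
    def __init__(self, task_types):
        self.task_types = task_types
        # success/attempt counts per (teammate, task type)
        self.stats = {m: defaultdict(lambda: [0, 0]) for m in ("human", "agent")}

    def estimate(self, member, task_type):
        succ, att = self.stats[member][task_type]
        return succ / att if att else 0.5  # uninformed prior before any evidence

    def allocate(self, task_type):
        # Assign the task to the teammate with the higher estimated success rate.
        return max(("human", "agent"), key=lambda m: self.estimate(m, task_type))

    def record(self, member, task_type, succeeded):
        counts = self.stats[member][task_type]
        counts[0] += int(succeeded)
        counts[1] += 1

    def explain(self, member, task_type):
        # A positively framed explanation accompanying the allocation decision.
        return (f"Assigned '{task_type}' to {member}: estimated success rate "
                f"{self.estimate(member, task_type):.0%} so far.")

if __name__ == "__main__":
    # Task types here are placeholders, not those used in the study.
    alloc = TaskAllocator(["anagram", "arithmetic", "image tagging"])
    for episode in range(3):
        for t in alloc.task_types:
            member = alloc.allocate(t)
            outcome = random.random() < 0.7  # stand-in for the real task outcome
            alloc.record(member, t, outcome)
            print(alloc.explain(member, t))
```

Under this kind of scheme, the explanation text is a direct by-product of the allocator's internal estimates, which is what makes its framing (positive, negative, or absent) an independently manipulable experimental variable.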
Cite this article
Lavender, B., Abuhaimed, S. & Sen, S. Positive and negative explanation effects in human–agent teams. AI Ethics 4, 47–56 (2024). https://doi.org/10.1007/s43681-023-00396-0