Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models

Francisco Javier Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, Helen Hastie


Abstract
As unmanned vehicles become more autonomous, it is important to maintain a high level of transparency regarding their behaviour and how they operate. This is particularly important in remote locations where they cannot be directly observed. Here, we describe a method for generating natural-language explanations of autonomous system behaviour and reasoning. Our method involves deriving an interpretable model of autonomy by having an expert ‘speak aloud’, and then providing explanations at various levels of detail based on this model. Through an online evaluation study with operators, we show that explanations are most effective when they offer multiple possible reasons but are tersely worded. This work has implications for designing interfaces for autonomy, as well as for explainable AI and operator training.
Anthology ID:
W18-6511
Volume:
Proceedings of the 11th International Conference on Natural Language Generation
Month:
November
Year:
2018
Address:
Tilburg University, The Netherlands
Editors:
Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
99–108
URL:
https://aclanthology.org/W18-6511
DOI:
10.18653/v1/W18-6511
Cite (ACL):
Francisco Javier Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, and Helen Hastie. 2018. Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. In Proceedings of the 11th International Conference on Natural Language Generation, pages 99–108, Tilburg University, The Netherlands. Association for Computational Linguistics.
Cite (Informal):
Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models (Chiyah Garcia et al., INLG 2018)
PDF:
https://aclanthology.org/W18-6511.pdf