Abstract
The present study investigates the effects of prototypical visualization approaches intended to increase the explainability of machine learning systems on perceived trustworthiness and observability. As the number of processes automated by artificial intelligence (AI) grows, so does the need to investigate users’ perceptions of such systems. Previous research on explainable AI (XAI) tends to focus on technological optimization. The limited amount of empirical user research leaves key questions unanswered, such as which XAI designs actually improve perceived trustworthiness and observability. We assessed three visual explanation approaches: a table of the classification scores used by the classifier, presented either on its own or supplemented by one of two different backtraced visual explanations. In an online experiment with a within-subjects design (N = 83), we examined the effects on trust and observability. While observability benefitted from visual explanations, information-rich explanations also led to decreased trust. Explanations can support human-AI interaction, but differentiated effects on trust and observability should be expected. The suitability of different explanatory approaches for individual AI applications should be examined further to ensure high levels of trust and observability in applications such as automated image processing.
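For readers unfamiliar with backtraced visual explanations, the following is a minimal sketch of one common way to produce such an explanation: a gradient-based saliency map that traces a classifier’s top score back to the input pixels. The abstract does not specify the exact backtracing method used in the study, so the pretrained ResNet-18 model, the preprocessing, and the `saliency_map` helper below are illustrative assumptions, not the authors’ implementation.

```python
# Illustrative only: a gradient-based saliency map as one example of a
# "backtraced" visual explanation. The model choice and preprocessing are
# assumptions; the study's actual classifier and method are not given here.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(image: Image.Image) -> torch.Tensor:
    """Backtrace the top classification score to the input pixels."""
    x = preprocess(image).unsqueeze(0).requires_grad_(True)
    scores = model(x)                        # classification scores per class
    top_class = scores.argmax(dim=1).item()  # class shown to the user
    scores[0, top_class].backward()          # gradient of that score w.r.t. x
    # Aggregate gradient magnitudes over colour channels -> 224x224 heat map
    return x.grad.abs().max(dim=1).values.squeeze(0)
```

Overlaying such a heat map on the original image would yield a region-highlighting explanation of the general kind the abstract refers to as backtraced, in contrast to the score table alone.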
Original language | English |
---|---|
Title of host publication | HCII 2020: Artificial Intelligence in HCI |
Editors | Helmut Degen, Lauren Reinerman-Jones |
Number of pages | 15 |
Volume | 12217 LNCS |
Publisher | Springer, Cham |
Publication date | 10.07.2020 |
Pages | 121-135 |
ISBN (Print) | 978-3-030-50333-8 |
ISBN (Electronic) | 978-3-030-50334-5 |
DOIs | |
Publication status | Published - 10.07.2020 |
Event | 22nd International Conference on Human-Computer Interaction, Copenhagen, Denmark, 19.07.2020 → 24.07.2020 |