Color for characters - Effects of visual explanations of AI on trust and observability

Tim Schrills*, Thomas Franke

*Corresponding author for this work


The present study investigates the effects of prototypical visualization approaches aimed at increasing the explainability of machine learning systems with regard to perceived trustworthiness and observability. As the number of processes automated by artificial intelligence (AI) increases, so does the need to investigate users' perception. Previous research on explainable AI (XAI) tends to focus on technological optimization. The limited amount of empirical user research leaves key questions unanswered, such as which XAI designs actually improve perceived trustworthiness and observability. We assessed three different visual explanation approaches, consisting of either only a table with the scores used for classification, or, additionally, one of two different backtraced visual explanations. In a within-subjects design with N = 83, we examined the effects on trust and observability in an online experiment. While observability benefitted from visual explanations, information-rich explanations also led to decreased trust. Explanations can support human-AI interaction, but differentiated effects on trust and observability have to be expected. The suitability of different explanatory approaches for individual AI applications should be further examined to ensure a high level of trust and observability in applications such as automated image processing.

Original language: English
Title of host publication: HCII 2020: Artificial Intelligence in HCI
Editors: Helmut Degen, Lauren Reinerman-Jones
Number of pages: 15
Volume: 12217 LNCS
Publisher: Springer, Cham
Publication date: 10.07.2020
ISBN (Print): 978-3-030-50333-8
ISBN (Electronic): 978-3-030-50334-5
Publication status: Published - 10.07.2020
Event: 22nd International Conference on Human-Computer Interaction - Copenhagen, Denmark
Duration: 19.07.2020 - 24.07.2020

