TY - JOUR
T1 - Keep talking and nobody decides: How can AI augment users' ability to detect misinformation while balancing engagement and workload?
AU - Mortaga, Maged
AU - Sieger, Marvin
AU - Kojan, Lilian
AU - Nunner, Hendrik
AU - Stellbrink, Leonard
AU - Valdez, André Calero
AU - Schrills, Tim
N1 - Publisher Copyright:
© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2025
Y1 - 2025
N2 - To detect misinformation, users of social networks potentially utilize AI-based decision support systems (DSS). However, a DSS's ability to augment user behavior depends on how it modifies users' decision-making and interaction experience. We examined how users' performance and experience are affected by the level of automation of a DSS in misinformation detection. In a preregistered within-subjects experiment, N=99 participants interacted with two AI-based DSS in a simulated environment. The first provided distinct recommendations (higher level of automation), while the second provided solely evaluative support (lower level of automation). We compared their effects on user behavior (here: accuracy, interaction frequency) and experience (here: trust, traceability). Participants showed higher accuracy when receiving recommendations but also interacted less frequently. Trust and perceived traceability did not differ between systems. We discuss whether more intensive processing of the evaluated information could be responsible for the higher number of errors in the evaluative system.
AB - To detect misinformation, users of social networks potentially utilize AI-based decision support systems (DSS). However, a DSS's ability to augment user behavior depends on how it modifies users' decision-making and interaction experience. We examined how users' performance and experience are affected by the level of automation of a DSS in misinformation detection. In a preregistered within-subjects experiment, N=99 participants interacted with two AI-based DSS in a simulated environment. The first provided distinct recommendations (higher level of automation), while the second provided solely evaluative support (lower level of automation). We compared their effects on user behavior (here: accuracy, interaction frequency) and experience (here: trust, traceability). Participants showed higher accuracy when receiving recommendations but also interacted less frequently. Trust and perceived traceability did not differ between systems. We discuss whether more intensive processing of the evaluated information could be responsible for the higher number of errors in the evaluative system.
M3 - Conference article
SN - 1613-0073
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
ER -