On the Ethical and Epistemological Utility of Explicable AI in Medicine

Christian Herzog*

*Corresponding author for this work

Abstract

In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence (AI)-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well as on a longer-term epistemological level by facilitating scientific progress that is informed through practice. With this article, I will therefore attempt to counter arguments against demands for explicable AI in medicine that are based on a notion of “whatever heals is right.” I will support my elaboration on the positive aspects of explicable AI in medicine by also pointing out the risks of non-explicable AI.

Original language: English
Article number: 50
Journal: Philosophy and Technology
Volume: 35
Issue number: 2
ISSN: 2210-5433
DOIs
Publication status: Published - 06.2022
