On the Ethical and Epistemological Utility of Explicable AI in Medicine

Abstract
In this article, I argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence (AI)-based medical technology. I build on Floridi's notion of "explicability," which holds that both the intelligibility and the accountability of AI systems are important for delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in direct patient–physician contact, as well as on a longer-term epistemological level by facilitating scientific progress that is informed by practice. With this article, I therefore attempt to counter arguments against demands for explicable AI in medicine that rest on the notion of "whatever heals is right." I support this argument by elaborating on the positive aspects of explicable AI in medicine and by pointing out the risks of non-explicable AI.
| Original language | English |
|---|---|
| Article number | 50 |
| Journal | Philosophy and Technology |
| Volume | 35 |
| Issue number | 2 |
| ISSN | 2210-5433 |
| DOIs | |
| Publication status | Published - 06.2022 |
UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs):

- SDG 3 Good Health and Well-being
- SDG 4 Quality Education
- SDG 5 Gender Equality
- SDG 8 Decent Work and Economic Growth
- SDG 9 Industry, Innovation, and Infrastructure
- SDG 10 Reduced Inequalities
- SDG 11 Sustainable Cities and Communities
- SDG 12 Responsible Consumption and Production
- SDG 16 Peace, Justice and Strong Institutions