Relational Ethics and Structural Epistemic Injustice of AI in Medicine

Christian Herzog*, Jason Branford

*Corresponding author for this work

Abstract

With this contribution, we propose an initial, relational ethics approach to epistemic inclusion in the development and application of medical decision support systems. We revisit issues of epistemic injustice in various forms of medical AI and consider different orders of epistemic oppression. We argue that AI-based decision support risks excluding patients and medical personnel from relevant epistemic processes vital to good medical practice, such as those relating to a subject’s lifeworld, value sets, assumptions concerning good health, and means of overcoming or living with disease. By recognizing medical decision support systems as mediators of shared epistemic resources between patients, medical personnel, and medical research, we contend that a concern for epistemic inclusion ought to guide their conception and development. A relational ethics-based consideration of these epistemic processes further illuminates the structural character of these forms of epistemic exclusion. Ultimately, our approach seeks to reinstate the epistemic privilege of those perspectives marginalized by these technologies and challenges the proclaimed impending epistemic obligation to utilize AI-based tools as the state of the art in medical diagnosis and, perhaps, even in therapy and its planning.

Original language: English
Article number: 160
Journal: Philosophy and Technology
Volume: 38
Issue number: 4
ISSN: 2210-5433
Publication status: Published - 12.2025
