Inexplicable AI in Medicine as a Form of Epistemic Oppression

Christian Herzog*

*Corresponding author for this work

Abstract

This contribution portrays inexplicable AI in medicine as a form of epistemic exclusion, i.e., as the marginalization or complete elimination of important stakeholders from the process of knowledge generation. To show this, I will first briefly characterize and exemplify Floridi's notion of explicability in the medical domain as instrumental to supporting accountability through intelligible explanatory interfaces. I will then follow Dotson in delineating three orders of epistemic oppression, the third of which is irreducible to either social or political oppression. I will go on to argue that inexplicable AI in medicine, by severely hindering decision-making processes that are shared interprofessionally and/or between patient and physician, may, under certain conditions, amount to third-order epistemic exclusion. It follows, I will argue, that the adoption of inexplicable AI in medicine may severely hamper progress toward supporting health as conceived holistically along the lines of the WHO's constitution. The use of inexplicable AI in medicine may instead yield short-term benefits that, however tempting, should not be allowed to outweigh the long-term advantages promised by a patient-centered and individualized form of medicine. In summary, this contribution offers a novel conceptual take on the issues involved in adopting black-box AI in the medical domain: one that goes beyond a merely utility-based argument and, in addition, depicts such adoption as a form of epistemic exclusion, which is also wrong in itself.

Original language: English
Title of host publication: IEEE International Symposium on Technology and Society
Publication date: 2022
Publication status: Published - 2022
