Trustworthy Healthcare Innovation Ecosystems — Supporting Responsible Innovation Practices by Establishing a Trustworthy Innovation Culture

Christian Herzog, Sabrina Blank, Bernd Carsten Stahl

Abstract

In this article, we explore questions about the culture of trustworthy artificial intelligence (AI) through the lens of ecosystems. We draw on the European Commission's Guidelines for Trustworthy AI and their philosophical underpinnings. Based on the latter, the trustworthiness of an AI ecosystem can be conceived of as grounded in both the so-called rational-choice and motivation-attributing accounts of trust: trusting is rational because solution providers deliver expected services reliably, while trust also involves relinquishing control by attributing one's own motivation, and hence goals, to another entity.
Our research question is: What aspects contribute to a responsible AI ecosystem that can promote justifiable trustworthiness in a healthcare environment? We argue that, especially when devising the governance and support aspects of a medical AI ecosystem, the so-called motivation-attributing account of trust provides fruitful pointers. There can and should be specific mechanisms and governance structures that support and nurture trustworthiness beyond mere reliability. After compiling
a list of preliminary requirements for this, we describe the emergence of one particular medical AI ecosystem and assess its
compliance with and future ways of improving its functioning as a responsible AI ecosystem that promotes trustworthiness.
Original language: English
Title: Trusting in Care Technology or Reliance on Socio-technical Constellation?
Place of publication: Delmenhorst
Publication date: 2024
Publication status: Published - 2024

Strategic Research Areas and Centres

  • Centres: Zentrum für Künstliche Intelligenz Lübeck (ZKIL)

DFG Subject Classification

  • 4.43-04 Artificial Intelligence and Machine Learning Methods
