The fairness of machine learning systems has recently received much attention, and this discussion has also reached the medical community. When is a medical machine learning system “fair”, and what can be done to “improve its fairness”? Here, I will focus on performance differences between patient groups, a prominent fairness concern: a trained model may be better at detecting a disease or predicting outcomes in one group than in others. Why does this happen? And what is the interplay between group representation in the training set and the resulting performance differences between groups? In this talk, I will attempt to shed some light on these questions, and I will highlight consequences for the path towards (more) equal model performance.
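The performance differences discussed above are typically quantified by computing a metric separately per patient group and comparing the results. A minimal sketch of this idea, with toy data and a hypothetical helper (not from the talk itself), could look like:

```python
# Hypothetical sketch: quantifying per-group performance gaps.
# Labels, predictions, and group assignments are synthetic toy data.

def per_group_sensitivity(y_true, y_pred, groups):
    """Sensitivity (recall) computed separately for each patient group."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        positives = [i for i in idx if y_true[i] == 1]
        if not positives:
            continue  # no positive cases in this group; metric undefined
        tp = sum(1 for i in positives if y_pred[i] == 1)
        out[g] = tp / len(positives)
    return out

# Toy example: two patient groups, "A" and "B"
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gaps = per_group_sensitivity(y_true, y_pred, groups)
print(gaps)  # the difference between the groups' values is the fairness gap
```

Here the model detects disease more reliably in group A than in group B; it is exactly this kind of gap, and its dependence on how well each group is represented in the training data, that the talk examines.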
Period: 2022 → …
Degree of Recognition: Regional

Research Area or Academic Center

  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)
  • Research Area: Intelligent Systems