TY - JOUR
T1 - Algorithmic fairness in precision psychiatry
T2 - analysis of prediction models in individuals at clinical high risk for psychosis
AU - Şahin, Derya
AU - Kambeitz-Ilankovic, Lana
AU - Wood, Stephen
AU - Dwyer, Dominic
AU - Upthegrove, Rachel
AU - Salokangas, Raimo
AU - Borgwardt, Stefan
AU - Brambilla, Paolo
AU - Meisenzahl, Eva
AU - Ruhrmann, Stephan
AU - Schultze-Lutter, Frauke
AU - Lencer, Rebekka
AU - Bertolino, Alessandro
AU - Pantelis, Christos
AU - Koutsouleris, Nikolaos
AU - Kambeitz, Joseph
N1 - Publisher Copyright:
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists.
PY - 2024/2/8
Y1 - 2024/2/8
N2 - Background Computational models offer promising potential for personalised treatment of psychiatric diseases. For their clinical deployment, fairness must be evaluated alongside accuracy. Fairness requires predictive models to not unfairly disadvantage specific demographic groups. Failure to assess model fairness prior to use risks perpetuating healthcare inequalities. Despite its importance, empirical investigation of fairness in predictive models for psychiatry remains scarce. Aims To evaluate fairness in prediction models for development of psychosis and functional outcome. Method Using data from the PRONIA study, we examined fairness in 13 published models for prediction of transition to psychosis (n = 11) and functional outcome (n = 2) in people at clinical high risk for psychosis or with recent-onset depression. Using accuracy equality, predictive parity, false-positive error rate balance and false-negative error rate balance, we evaluated relevant fairness aspects for the demographic attributes 'gender' and 'educational attainment' and compared them with the fairness of clinicians' judgements. Results Our findings indicate systematic bias towards assigning less favourable outcomes to individuals with lower educational attainment in both prediction models and clinicians' judgements, resulting in higher false-positive rates in 7 of 11 models for transition to psychosis. Interestingly, the bias patterns observed in algorithmic predictions were not significantly more pronounced than those in clinicians' predictions. Conclusions Educational bias was present in algorithmic and clinicians' predictions, assuming more favourable outcomes for individuals with higher educational level (years of education). This bias might lead to increased stigma and psychosocial burden in patients with lower educational attainment and suboptimal psychosis prevention in those with higher educational attainment.
UR - http://www.scopus.com/inward/record.url?scp=85176365435&partnerID=8YFLogxK
U2 - 10.1192/bjp.2023.141
DO - 10.1192/bjp.2023.141
M3 - Journal articles
C2 - 37936347
AN - SCOPUS:85176365435
SN - 0007-1250
VL - 224
SP - 55
EP - 65
JO - British Journal of Psychiatry
JF - British Journal of Psychiatry
IS - 2
ER -