Open Access

Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations

Lookup NU author(s): Dr Francis McKay (ORCiD)

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

© 2025 World Health Organization. Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One major source of bias is the data that underpin such technologies. The STANDING Together recommendations aim to encourage transparency regarding the limitations of health datasets and proactive evaluation of their effect across population groups. Draft recommendation items were informed by a systematic review and a stakeholder survey. The recommendations were developed using a Delphi approach, supplemented by a public consultation and an international interview study. Overall, more than 350 representatives from 58 countries provided input into this initiative. 194 Delphi participants from 25 countries voted and provided comments on 32 candidate items across three electronic survey rounds and one in-person consensus meeting. The 29 STANDING Together consensus recommendations are presented here in two parts. Recommendations for Documentation of Health Datasets provide guidance for dataset curators to enable transparency around data composition and limitations. Recommendations for Use of Health Datasets aim to enable identification and mitigation of algorithmic biases that might exacerbate health inequalities. These recommendations are intended to prompt proactive inquiry rather than act as a checklist. We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and the absence of this information as a limitation. We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies that are safe and effective.


Publication metadata

Author(s): Alderman JE, Palmer J, Laws E, McCradden MD, Ordish J, Ghassemi M, Pfohl SR, Rostamzadeh N, Cole-Lewis H, Glocker B, Calvert M, Pollard TJ, Gill J, Gath J, Adebajo A, Beng J, Leung CH, Kuku S, Farmer L-A, Matin RN, Mateen BA, McKay F, Heller K, Karthikesalingam A, Treanor D, Mackintosh M, Oakden-Rayner L, Pearson R, Manrai AK, Myles P, Kumuthini J, Kapacee Z, Sebire NJ, Nazer LH, Seah J, Akbari A, Berman L, Gichoya JW, Righetto L, Samuel D, Wasswa W, Charalambides M, Arora A, Pujari S, Summers C, Sapey E, Wilkinson S, Thakker V, Denniston A, Liu X

Publication type: Review

Publication status: Published

Journal: The Lancet Digital Health

Year: 2025

Volume: 7

Issue: 1

Pages: e64-e88

Print publication date: 01/01/2025

Online publication date: 18/12/2024

Acceptance date: 02/04/2024

ISSN (electronic): 2589-7500

Publisher: Elsevier Ltd

URL: https://doi.org/10.1016/S2589-7500(24)00224-3

DOI: 10.1016/S2589-7500(24)00224-3

Data Access Statement: A detailed summary of the way in which each individual item was modified during development of these recommendations is provided in the appendix (pp 10–43). This summary includes the performance of each item across all rounds of the Delphi study. Anonymised raw data from each Delphi voting round and the R code used to generate plots and summary statistics can be requested, for the purpose of verifying the findings of this research, via an email to the corresponding author. Data relating to questions that required free-text responses and those relating to demographic attributes will be redacted. Other relevant study documentation (specifically, the wording of the questions asked across all three Delphi survey rounds) is provided in the appendix (pp 44–154).
