
Open Access

Towards a unified approach to formal "risk of bias" assessments for causal and descriptive inference

Lookup NU author(s): Dr Gavin Stewart (ORCiD)

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

© The Author(s) 2026. Statistics is sometimes described as the science of reasoning under uncertainty. Statistical models provide one view of this uncertainty, but what is frequently neglected is the “invisible” portion of uncertainty: that assumed not to exist once a model has been fitted to some data. Systematic errors, i.e. bias, in data relative to some model and inferential goal can seriously undermine research conclusions, and qualitative and quantitative techniques have been created across several disciplines to quantify and generally appraise such potential biases. Perhaps best known are so-called “risk of bias” assessment instruments used to investigate the likely quality of randomised controlled trials in medical research. However, the logic of assessing the risks caused by various types of systematic error to statistical arguments applies far more widely. This logic applies even when statistical adjustment strategies for potential biases are used, as these frequently make assumptions (e.g. data “missing at random”) that can rarely be empirically guaranteed. Mounting concern about such situations can be seen in the increasing calls for greater consideration of biases caused by nonprobability sampling in descriptive inference (e.g. in survey sampling), and the statistical generalisability of in-sample causal effect estimates in causal inference. Both of these relate to the consideration of model-based and wider uncertainty when presenting research conclusions from models. Given that model-based adjustments are never perfect, we argue that qualitative risk of bias reporting frameworks for both descriptive and causal inferential arguments should be further developed and made mandatory by journals and funders. It is only through clear statements of the limits to statistical arguments that consumers of research can fully judge their value for any given application.
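The abstract's point about nonprobability sampling can be illustrated with a toy simulation (illustrative only; the selection mechanism and numbers below are assumptions for the sketch, not taken from the paper). When the chance of a unit being observed depends on the quantity being measured, the sample mean stays biased no matter how large the sample grows, whereas an equal-probability sample converges on the population mean:

```python
import random

random.seed(42)

# Toy population: a trait distributed N(50, 10) over 100,000 units.
N = 100_000
population = [random.gauss(50, 10) for _ in range(N)]
pop_mean = sum(population) / N

# Probability sample: every unit equally likely to be selected.
prob_sample = random.sample(population, 5_000)

# Nonprobability sample: units with higher trait values are more likely
# to self-select (e.g. keener participants in a voluntary survey), so
# selection probability is proportional to the trait itself.
nonprob_sample = [x for x in population
                  if random.random() < min(1.0, x / 100)][:5_000]

print(f"population mean:       {pop_mean:.2f}")
print(f"probability sample:    {sum(prob_sample) / len(prob_sample):.2f}")
print(f"nonprobability sample: {sum(nonprob_sample) / len(nonprob_sample):.2f}")
```

The nonprobability estimate sits systematically above the population mean, and because the error is bias rather than variance, collecting more data under the same selection mechanism does not remove it — which is the situation the proposed risk-of-bias reporting frameworks are meant to flag.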


Publication metadata

Author(s): Pescott OL, Boyd RJ, Powney GD, Stewart GB

Publication type: Article

Publication status: Published

Journal: Quality and Quantity

Year: 2026

Pages: Epub ahead of print

Online publication date: 16/03/2026

Acceptance date: 02/03/2026

Date deposited: 13/04/2026

ISSN (print): 0033-5177

ISSN (electronic): 1573-7845

Publisher: Springer Science and Business Media B.V.

URL: https://doi.org/10.1007/s11135-026-02687-0

DOI: 10.1007/s11135-026-02687-0




Funding

Funder name: Natural Environment Research Council
Funder reference: award number NE/R016429/1

Funder name: UK's Natural Environment Research Council, Exploring the Frontiers award ("Biodiversity indicators from nonprobability samples: Interdisciplinary learning for science and society")
Funder reference: award number NE/X010384/1
