
Optimising the ingredients for evaluation of the effects of intervention

Lookup NU author(s): Professor David Howard

Downloads

Full text for this publication is not currently held within this repository. Alternative links are provided below where available.


Abstract

Background: In Howard, Best, and Nickels (2015, "Optimising the design of intervention studies: Critiques and ways forward", Aphasiology), we presented a set of ideas relevant to the design of single-case studies for evaluation of the effects of intervention. These were based on our experience with intervention research and methodology, and on a set of simulations. Our discussion and conclusions were not intended as guidelines (of which there are several in the field) but rather aimed to stimulate debate and to optimise designs in the future. Our paper achieved the first aim: it received a set of varied commentaries, not all of which felt we were optimising designs, and which raised further points for debate.

Aims: This paper responds to the commentaries in the context of recent guidelines for evaluation of the design of single-case studies. We aim to further the discussion our target article started and to extend its scope to issues that were not raised in the target article (e.g., replication).

Main Contributions and Conclusions: There is a strong consensus that adequately designed single-case studies of intervention are an appropriate and important tool in the quest for effective interventions for people with cognitive disorders. Many also agree that there is no single design that is appropriate for every intervention, every participant, or every question. However, whichever design is used must be able to discriminate between the true effect of an intervention on behaviour and other potential reasons for change (e.g., practice effects, spontaneous recovery, Hawthorne effects, and placebo effects). We have suggested that, depending on the conditions and the question to be addressed, this can be achieved using a combination of design features. These may include: multiple pre-treatment baselines; treated and untreated (or subsequently treated) items/processes/tasks; control tasks (not predicted to be affected by treatment even when generalisation is expected); and a cross-over phase (replication across items/tasks). In addition, the outcome of treatment should be evaluated statistically. We note that generalisation, while clinically desirable, can make it particularly difficult to attribute change to intervention unless appropriate controls have been included, and that when items are selected on the basis of poor pre-treatment performance, apparent treatment-related gains may in fact be due to regression to the mean. We discuss the implications of this for future research.
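The regression-to-the-mean point lends itself to a small simulation of the kind the target article describes. The sketch below is not the authors' code; it is a minimal illustration under assumed parameters (100 naming items with a uniform true success probability of 0.5, 10 trials per item, 20 items per set; all names and numbers are hypothetical). It selects the items with the poorest baseline scores as the "treatment set" and shows that they appear to improve at retest even though no intervention occurs, while a randomly chosen control set stays roughly level.

```python
import random

random.seed(1)

N_ITEMS = 100     # picture-naming items, all with the same true ability
P_CORRECT = 0.5   # true probability of naming any item correctly
N_TRIALS = 10     # presentations per item at baseline and at retest
SET_SIZE = 20     # items per "treated" or control set

def test(n_items):
    """Simulate one test: proportion correct per item over N_TRIALS trials."""
    return [sum(random.random() < P_CORRECT for _ in range(N_TRIALS)) / N_TRIALS
            for _ in range(n_items)]

baseline = test(N_ITEMS)
retest = test(N_ITEMS)   # no intervention occurs between the two tests

# Select the items with the poorest baseline performance, as one might when
# choosing items "most in need of treatment".
order = sorted(range(N_ITEMS), key=lambda i: baseline[i])
selected = order[:SET_SIZE]
control = random.sample(range(N_ITEMS), SET_SIZE)  # random comparison set

def set_mean(scores, idx):
    return sum(scores[i] for i in idx) / len(idx)

print(f"Selected items: baseline {set_mean(baseline, selected):.2f} "
      f"-> retest {set_mean(retest, selected):.2f}")   # apparent 'gain'
print(f"Random items:   baseline {set_mean(baseline, control):.2f} "
      f"-> retest {set_mean(retest, control):.2f}")    # roughly unchanged
```

Because the selected items scored low at baseline partly through sampling noise, their retest scores drift back toward the true mean of 0.5: an apparent gain that a treated-items-only analysis would credit to the intervention. An untreated control set, one of the design features listed above, is what exposes the artefact.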


Publication metadata

Author(s): Nickels L, Best W, Howard D

Publication type: Article

Publication status: Published

Journal: Aphasiology

Year: 2015

Volume: 29

Issue: 5

Pages: 619-643

Online publication date: 30/01/2015

ISSN (print): 0268-7038

ISSN (electronic): 1464-5041

Publisher: Routledge

URL: http://dx.doi.org/10.1080/02687038.2014.1000613

DOI: 10.1080/02687038.2014.1000613

