Table 5 Actions in refining, documenting and planning for future evaluation (based on all approaches in taxonomy)

From: Taxonomy of approaches to developing interventions to improve health: a systematic methods overview

Domain / Action / Methods

5. Refining

14. Test on small samples for feasibility and acceptability and make changes to the intervention if possible [17, 24, 27, 28, 30, 31, 34, 53, 54, 57]

Authors of a range of approaches recommend iterative testing and formative evaluation for this action. Some recommend qualitative research with those receiving and delivering the intervention: for example, think-aloud interviews with members of the target population as they use the intervention, videos of people using the intervention [17, 30, 31, 57], or asking users to keep a diary of issues arising during use to prompt their memory in later interviews. In some approaches the prototype discussed in the Creating domain is used to obtain specific rather than general views on the intervention [31, 44, 45], and observation is used to move beyond people’s reported views. The process can start small, for example with only the development team commenting on the first prototype, before widening the sample to members of the target population [31]. Quantitative as well as qualitative research is recommended, for example a pre-test/post-test comparison looking for changes in some intermediate outcomes for a small number of the target population using the intervention [34]. The potential for unintended consequences can also be considered [48].

15. Test on a more diverse sample, moving away from the single setting where early development of the intervention took place.

This can involve asking questions such as ‘is it working as intended?’, ‘is it achieving short-term goals?’ and ‘is it having serious adverse effects?’ [17, 24, 25, 28, 31, 34, 54, 57]

The iterative approach used in Action 14 continues here: changes are made to the intervention, and mixed methods are used to check whether those changes work as planned in more diverse samples. Authors of a range of approaches recommend pre-test/post-test designs, n-of-1 trials, and observation or video to consider acceptability and early feasibility. They also recommend involving real members of the target population in a real-life environment to identify interactions and relationships between different service providers and patients, and to modify the intervention iteratively. Groups of wider stakeholders can review the intervention as it iterates [24, 54].

16. Optimise the intervention for efficiency prior to a full RCT [34, 49, 50, 51, 52]

Some approaches consider Actions 14 and 15 to be part of the process of optimising the intervention through the use of mixed methods. Case series can be used to consider issues such as dose, patterns of use over time, and safety [34]. Efficiency-based approaches and some stepped/phased approaches are more quantitative: fractional factorial designs can be used to identify active components, interactions between components of an intervention, the doses that lead to the best outcomes, and tailoring to sub-groups [34, 50]. A review of optimisation of interventions has been published [4].

6. Documenting

17. Document the intervention, describing it so that others can use it, and offer instructions on how to train practitioners delivering the intervention and on how to implement it [7, 17, 24, 26, 29, 34, 48, 57]

This document, sometimes called a manual, is written by the developers. Authors of some approaches recommend that it undergo external review by stakeholders, including the target population and those delivering the intervention, to make sure it is feasible for use in the real world [24, 26, 29].

7. Planning for future evaluation

18. Develop the objectives of the outcome and process evaluations.

This includes determining how outcomes and mediators of change can be measured, developing measures, specifying the evaluation design, planning recruitment, and considering the feasibility of a full RCT [24, 27, 29, 31, 40, 47, 48]

Authors of some approaches recommend planning for a randomised study or an experimental design with controls to measure effectiveness [29], whereas others recognise that there may not always be the resources to evaluate with an RCT, in which case the intervention may be implemented in practice and monitored [31].

Authors of some approaches recommend involving stakeholders such as funders, implementers and the target population in this action [27], in particular reaching agreement with key stakeholders about how to define and measure success [48]. Partnership approaches recommend that this action be undertaken with stakeholders [40].