A new era for intervention development studies
Pilot and Feasibility Studies volume 1, Article number: 36 (2015)
This editorial introduces a new special series on intervention development in the online open access journal Pilot and Feasibility Studies. An intervention development study reports the rationale, decision-making processes, methods and findings that occur from the idea or inception of an intervention until it is ready for formal feasibility, pilot or efficacy testing prior to a full trial or evaluation. This editorial begins to explore some of the challenges associated with this early research stage. It commences a debate about how to produce novel interventions that are fit for purpose and solve important health and social care problems. Transparent reporting of more intervention development studies will improve scientific rigour and allow everyone to learn from the experiences of others.
Intervention development can be viewed as a “black box” or the “Cinderella” of complex intervention trial design. This is because important processes and decision-making in the early stages of intervention development are seldom reported and, until now, journals have shown little interest in publishing such studies. Intervention development studies exist in small grant reports and PhD chapters and tend to gather dust on the shelf as researchers move on to secure larger grants and new projects. Anecdotally, therefore, researchers encounter recurring pitfalls, spend time in blind alleys and worry about intervention decisions, with little guidance available. In addition, until recently, UK research funding institutions have not prioritised investment in complex intervention development.
The importance of methodological rigour at this early stage is recognised, and there is research waste from developing interventions that never impact on health care. With ageing populations, multi-morbidity and lifestyle behaviours that seem remarkably resistant to change, effective interventions are needed. In this special series, we begin to open the black box of intervention development. We are particularly interested in complex interventions, where there are several interacting components, rather than drugs or invasive devices, which have regulated development processes.
How are intervention development studies defined?
A working definition of an intervention development study for this special series is
A study that describes the rationale, decision-making processes, methods and findings that occur from the idea or inception of an intervention until it is ready for formal feasibility, pilot or efficacy testing prior to a full trial or evaluation.
Put simply, it is a study about the what, why, how and when decisions involved in specifying an intervention, so that it can be replicated with fidelity by others. The resulting specification is sometimes called an “intervention manual”, which is prospectively written prior to a trial and includes any training required to deliver the intervention. The definition above remains a working one because there is remarkably little literature on intervention development studies. Definitions and use of language are crucial. The UK National Institute for Health Research (NIHR) glossary does not define an intervention development study per se, but defines an intervention as
The process of intervening on people, groups, entities or objects in an experimental study. In controlled trials, the word is sometimes used to describe the regimens in all comparison groups, including placebo and no-treatment arms.
It could be argued that the tail is wagging the dog, in that intervention development studies are now likely to be shaped by the TIDieR guidance, which aims to improve the systematic reporting of interventions.
There is a grey area around overlap with, and definitions of, feasibility studies. The NIHR defines a feasibility study by asking “Can this study be done?” and expects that
A robust case be made for the plausibility of the intervention and clinical importance of any subsequent full trial.
Applying this definition, feasibility can apply within an intervention development study, as described by Murray and colleagues in this issue, e.g. can these components work together? Are they likely to be summative, to be synergistic or to detract from one another? Feasibility can also apply to the operational aspects of trial design (e.g. recruitment, engagement, retention and outcome assessment), which do not feature in an intervention manual. The examples provided by Yardley and colleagues in this issue make a strong case for assessing aspects of operational feasibility right from the start of intervention development. An extension of the CONSORT guidelines for pilot and feasibility studies is in progress, and this has given rise to much debate about the definitions of pilot and feasibility studies. For example, the scope of the Medical Research Council (MRC) Public Health Intervention Development funding scheme includes research to inform sample size and power calculations and estimates of recruitment and retention, which are usually assessed in a formal pilot or feasibility study.
What do we know about intervention development studies?
Little is known about how many intervention development studies are tested in a formal pilot, proceed to a full evaluation and are implemented into routine practice. Of 101 reports of new medical discoveries, over-optimism was evident: only one led to the development of an intervention that was widely used. Optimistic bias is therefore important in intervention development studies. Assessing the risk of bias in systematic reviews is routine, and there are particular issues for assessing bias in behaviour change trials, but less attention has been paid to bias when developing an intervention.
What strategies might help to identify and reduce optimistic bias? Karl Popper proposed the discipline of falsification in the scientific method and the importance of challenging all knowledge by proactively seeking to disconfirm it. From thousands of observations of swans in the UK, you would conclude that all swans are white; searching in unexpected places is required to find the black swan. Daniel Kahneman describes overconfidence as endemic in medicine and identifies the planning fallacy, where plans are unrealistically close to best-case scenarios. Kahneman recommends gaining an outside view, for example by examining statistical data from similar projects. Others highlight the importance of qualitative methods and triangulation approaches to critique or validate quantitative data and provide a personal perspective.
Groupthink is where cognitively homogeneous groups have strong allegiances, tend not to voice dissent, rationalise away counterarguments and are confident in their plans. What happens within groups is complex, and strategies to avoid groupthink in the early stages of trial design have received little attention. Research teams and expert panels can benefit from selecting independent thinkers to be the critical friend, the dragon in the den or the devil’s advocate. Citation bias, where research teams with prestige are disproportionately cited, can be a problem which contributes to research waste. Groupthink can result in premature conceptual closure, the collection and reporting of confirming data only, and “assumption habits” or blind spots going unrecognised. A recent example is the theory of planned behaviour, a dominant behaviour change theory which has underpinned interventions for three decades and is no longer considered valid. This example provides valuable insights into how unexpected findings can be attributed to operational flaws rather than prompting questions about the validity of the theory itself. It can be argued, therefore, that methods which tend towards consensus (e.g. focus groups and Delphi techniques) are only appropriate when finalising an intervention specification.
Sampling, location and context bias should be considered, regardless of the methods used to develop an intervention. For example, when designing smartphone applications, sampling and location decisions can influence the data collected about intervention use. Certain groups or settings may not be included, e.g. ethnic minorities or rural communities. Convenience sampling bias is where participants or settings overtly or covertly have something in common, which limits generalisability. This may be unavoidable when resources are limited and can be accounted for in later pilot testing. Constructing a diversity sampling matrix and using several interviewers from different disciplines to minimise interviewer bias are useful strategies to increase the range of perspectives. Proactively challenging and testing assumptions, with follow-up of any counter-intuitive findings, is important.
Intervention development is seldom a fixed, prospective, linear process
It is apparent that isolating intervention development as a stand-alone research stage, study or report may not be appropriate. Even in pilot trials, defined as a “smaller version of the main study used to test whether the components of the main study can all work together”, further tinkering with the intervention may be warranted prior to a full trial. This is because health service contexts vary, change and interact with complex interventions in many different ways. Unlike the constant, enclosed environment of a Petri dish in which new antibiotics are introduced to a microbial culture, interventions in which human relationships form a significant part are dynamic and vulnerable to unexpected events. The meanings attributed to the intervention by people, and the accompanying visuals, emotions and narrative, are well recognised in commercial product development but have been relatively neglected in health research. Innovation approaches are changing. In the human-computer interaction field, an intervention needs to adapt constantly to rapid innovation, and a cyclical, iterative approach is recommended for generating hypotheses for mini tests [20, 24]. Mini tests or pilots generate data that can validate decisions and challenge critical assumptions earlier and more quickly when developing an intervention. This contrasts with more labour-intensive approaches where extensive modelling and theory generation precede any testing. Rapid validation mini tests can overcome one of the limitations of qualitative research in intervention development: when people are asked about hypothetical interventions, they can say what they find acceptable, need or want, but their subsequent actions can be quite different. More tangible mini-testing processes within the intended intervention context have some parallels to action research and quality improvement approaches like the plan-do-study-act cycle.
What guidance is available?
The Medical Research Council guidance for complex interventions does not define an intervention development study per se but poses helpful questions for researchers . In Table 1, I propose adaptations to the MRC Guidance questions to reflect more recent literature and debate.
Questions 1–3 are unchanged. The TIDieR guidelines are incorporated into question 4. Question 5 is reworded to reflect the absence, heterogeneity or inadequate reporting of interventions in systematic reviews. Strategies for intervention synthesis to select the “best bets” when translating systematic review evidence into intervention development are proposed. Novel intervention development may be justified, for example, when a series of null trials in a particular context runs counter to the international systematic review evidence, or when existing interventions have limited reach for particular groups, for example the lower uptake of weight loss interventions by men compared to women. Question 6 is reworded to consider future implementation barriers and facilitators for both a full trial and translation into routine care. Question 7 is new and asks the following: has the potential for bias been considered? It proposes optimistic bias, groupthink, and sampling, location and context biases as important examples.
There are many challenges ahead for intervention development studies. How do the academic community and policy makers weigh up the pros and cons of “slow research” versus efficient designs? How do research teams make the myriad of small decisions to finalise intervention manuals, particularly when systematic review evidence is lacking? How do traditional graded evidence , inductive, deductive and abductive logic , tacit and explicit knowledge , fast and slow thinking  and creativity contribute to intervention development? How can methods and decision-making processes be reported in a rigorous scientific and transparent way? How useful would checklists or frameworks be when commercial technology companies like Google (see https://research.google.com/) achieve rapid innovation with impact through flexibility?
This series commences a debate which involves the research community, patients, clinicians, the public, charities and anyone with an interest in developing the best possible interventions for health and wellbeing. It aims to raise the profile and value of intervention development research, to open the black box and begin to unpack the experiences, methods, processes and outcomes. This will help to promote cross-disciplinary learning and stimulate debate about how to produce novel interventions to solve important health and social care problems. Intervention development is where creativity, science and art meet, and the balance is delicate. Future qualitative synthesis of intervention development studies could help to illuminate why the early promise of interventions so often disappears, resulting in research waste. The principles of high-quality scientific reporting and rigour are imperative, with descriptions of how patients, the public, clinicians, relevant staff and policy makers are involved in the decision-making processes. Transparent reporting of both successes and failures is crucial.
Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: new guidance. Medical Research Council; 2008. p. 1–39. Available at: https://www.mrc.ac.uk/documents/pdf/complex-interventions-guidance/
Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. The Lancet. 2014;383(9912):156–65.
National Institute for Health Research. Glossary. Available at: http://www.nets.nihr.ac.uk/glossary?result_1655_result_page=I (Accessed 24/8/2015)
Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.
Murray J, Williams B, Hoskins G, Skar S, McGhee J, Treweek S, et al. An interdisciplinary approach to developing visually mediated interventions to change behaviour: an asthma and physical activity intervention exemplar. Pilot and Feasibility Studies. 2015, in press.
Yardley L, Ainsworth B, Arden-Close E, Muller I. The person-based approach to enhancing the acceptability and feasibility of interventions. Pilot and Feasibility Studies. 2015, in press.
Lancaster GA. Pilot and feasibility studies come of age! Pilot and Feasibility Studies. 2015;1(1):1.
Medical Research Council Public Health Intervention Development Scheme. Available at: http://www.mrc.ac.uk/funding/browse/public-health-intervention-development-scheme-phind/ (Accessed 24/8/2015)
Contopoulos-Ioannidis DG, Alexiou GA, Gouvias TC, Ioannidis JP. Life cycle of translational research for medical interventions. Science. 2008;321(5894):1298–9.
Higgins JPT, Altman DG, Sterne JAC, Cochrane Statistical Methods Group, Cochrane Bias Methods Group. Chapter 8: Assessing risk of bias in included studies. The Cochrane Collaboration; 2011. Available at: http://www.cochrane-handbook.org/ (Accessed 24/8/2015)
de Bruin M, McCambridge J, Prins JM. Reducing the risk of bias in health behaviour change trials: improving trial design, reporting or bias assessment criteria? A review and case study. Psychol Health. 2015;30(1):8–34.
Popper K. The logic of scientific discovery. Routledge Classics; 2005. ISBN: 9780415278447.
Kahneman D. Thinking, fast and slow. Penguin; 2011. ISBN: 9780141033570.
Jick TD. Mixing qualitative and quantitative methods: triangulation in action. Adm Sci Q. 1979;24(4):602–11.
Janis I. Groupthink: psychological studies of policy decisions and fiascoes. 1st ed. Boston: Houghton Mifflin; 1982.
Hoddinott P, Allan K, Avenell A, Britten J. Group interventions to improve health outcomes: a framework for their design and delivery. BMC Public Health. 2010;10(1):800.
Surowiecki J. The wisdom of crowds: why the many are smarter than the few. New ed. London: Abacus; 2005.
Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet. 2014;383(9912):166–75.
Sniehotta FF, Presseau J, Araújo-Soares V. Time to retire the theory of planned behaviour. Health Psychology Review. 2014;8(1):1–7.
Lathia N, Rachuri KK, Mascolo C, Rentfrow PJ. Contextual dissonance: design bias in sensor-based experience sampling methods. In: Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM; 2013. Available at: http://184.108.40.206/~cm542/papers/ubicomp2013.pdf (Accessed 24/8/2015)
Ritchie J, Lewis J, Nicholls CM, Ormston R. Qualitative research practice: a guide for social science students and researchers. London: Sage Publications; 2013. ISBN: 978-0-7619-7110-8.
Wells M, Williams B, Treweek S, Coyle J, Taylor J. Intervention description is not enough: evidence from an in-depth multiple case study on the untold role and impact of context in randomised controlled trials of seven complex interventions. Trials. 2012;13(1):95–111.
Asch DA, Rosin R. Innovation as discipline, not fad. N Engl J Med. 2015;373(7):592–4.
Lathia N, Pejovic V, Rachuri KK, Mascolo C, Musolesi M, Rentfrow PJ. Smartphones for large-scale behavior change interventions. IEEE Pervasive Computing. 2013;3:66–73.
Morrison B, Lilford R. How can action research apply to health services? Qual Health Res. 2001;11(4):436–49.
Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Quality & Safety. 2013;0:1–9.
Glasziou PP, Chalmers I, Green S, Michie S. Intervention synthesis: a missing link between a systematic review and practical treatment(s). PLoS Med. 2014;11(8), e1001690.
Hoddinott P, Seyara R, Marais D. Global evidence synthesis and UK idiosyncrasy: why have recent UK trials had no significant effects on breastfeeding rates? Maternal & Child Nutrition. 2011;7:221–7.
Robertson C, Archibald D, Avenell A, Douglas F, Hoddinott P, Boyers D, et al. Systematic reviews of and integrated report on the quantitative, qualitative and economic evidence base for the management of obesity in men. Health Technol Assess. 2014;18(35).
Blaikie N. Designing social research. Cambridge: Polity Press; 2009. p. 24–29. ISBN: 978-0-7456-1767-1.
Greenhalgh T. What is this knowledge that we seek to “exchange”? Milbank Q. 2010;88(4):492–9.
The author would like to acknowledge many discussions with colleagues about intervention development. In particular: those who participated in an intervention development day at the Nursing, Midwifery and Allied Health Professions Research Unit (NMAHP-RU); ongoing discussions with Eddie Duncan, Emma France, Suzanne Hagen, Sara Levati, Mary Wells and Brian Williams at NMAHP-RU; with Alicia O’Cathain, University of Sheffield and members of the MRC CONDUCT II Hub at the University of Bristol, particularly Jane Blazeby, Jenny Donovan and Katrina Turner; a workshop organised by the University of Glasgow Institute of Health and Wellbeing on “What Works in Digital Health” in July 2015 (http://www.sicsa.ac.uk/events/sicsa-ux-mhealth-works-digital-health/). I would like to thank Emma France, NMAHP-RU, University of Stirling for commenting on an earlier draft of this article.
The author declares that she has no competing interests.
The author was commissioned to write this article by BMC Pilot and Feasibility Studies and would like to thank Daniel Shanahan and Ella Flemyng for their support. No funding was received. The Nursing, Midwifery and Allied Health Professions Research Unit, University of Stirling is core-funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. The views expressed are those of the author alone.
About this article
Cite this article
Hoddinott, P. A new era for intervention development studies. Pilot Feasibility Stud 1, 36 (2015). https://doi.org/10.1186/s40814-015-0032-0