Pilot and feasibility studies: extending the conceptual framework

Abstract

In 2016, we published a conceptual framework outlining the conclusions of our work in defining pilot and feasibility studies. Since then, the CONSORT extension to randomised pilot and feasibility trials has been published and there have been further developments in the pilot study landscape. In this paper, we revisit and extend our framework to incorporate the various feasibility pathways open to researchers, which include internal pilot studies. We consider, with examples, when different approaches to feasibility and pilot studies are more effective and efficient, taking into account the pragmatic decisions that may need to be made. The ethical issues involved in pilot studies are discussed. We end with a consideration of the funders’ perspective in making difficult resource decisions to include feasibility work and the policy implications of these; throughout, we provide examples of the uncertainties and compromises that researchers have to navigate to make progress in the most efficient way.

Introduction

In 2016, we published a conceptual framework for defining feasibility and pilot studies in preparation for a randomised controlled trial [1]. The paper has been extensively cited, and our definitions have been widely accepted by funding bodies such as the UK National Institute for Health Research (NIHR) [2] and the Health Research Board (HRB) in Ireland [3]. However, there have also been further developments in the pilot study landscape. In this article, we present a rationale for extending the conceptual framework to incorporate these developments and discuss their implications for funders, policy makers and researchers.

The 2016 conceptual framework built on work by members of our group [4, 5] and other researchers [6], and on guidance from the National Institute for Health Research (NIHR) [7] and Medical Research Council (MRC) [8] funding bodies. It is summarised in the first section of this article. The original reason for developing the framework was multifactorial. One key driver was the realisation that the terms ‘pilot’ and ‘feasibility’ study were used inconsistently and interchangeably.

Scientific communication should ensure, as far as possible, that words are used with a common understanding of their meaning. It is also important for the scientific community that research findings are disseminated so that the results can be used by others. We initially developed definitions for the terms pilot and feasibility [1] in line with the NIHR [7] and MRC [8] recommendations and other informed opinion at the time (Table 1). We found that many publications labelled as pilot studies appeared to carry these labels because they had limitations, rather than because they were conducted as a true pilot for a subsequent definitive study. Examples of such limitations include a small sample size, lack of a power calculation (so they were not definitive trials), lack of meaningful clinical outcomes, or only short-term follow-up. We also found that some authors incorrectly considered the use of surrogate outcomes to be pilot work [9]. In effect, many studies labelled as pilot or feasibility were neither definitive nor undertaken to inform subsequent research. We suggested that the terms ‘pilot’ and ‘feasibility’ had historically been misappropriated and the labels devalued. This potentially explained why, at the time of publishing our conceptual framework in 2016, journal editors appeared unwilling to publish pilot and feasibility studies (see Table 1), thus limiting their availability and exposure to other researchers in the field.

Table 1 A chronology of guidance and evidence illustrating the evolving feasibility and pilot study landscape from our perspective

In 2016, in line with our original plans, we published the reporting guideline for feasibility and pilot trials as a CONSORT extension [13, 14]. In the interim, the publication of pilot and feasibility studies had been facilitated by the launch of the BMC journal Pilot and Feasibility Studies [12], which has seen a steady increase in submissions over its 6-year life. The aim of the journal is to provide a forum for discussion around this key aspect of the scientific process and to ensure that these studies are published, so as to complete the publication thread for clinical research.

Concurrently, there has also been increasing interest from methodological experts on the design of pilot and feasibility studies, and a greater value placed on them, by funding bodies, as essential building blocks in intervention trial development. This is seen as part of a growing focus on efficiency in trial development and design [17] and includes debate on the relative merits of internal and external pilot trials. Whereas an external pilot trial, as the name implies, does not use any of the data collected in the final analysis of any subsequent main trial, an internal pilot trial is an integral first part of the definitive trial, with internal pilot trial data contributing to the final data set.

The recent methodological debate has included discussion of the term ‘exploratory study’ [15, 18]. This term has been used both for small, underpowered studies designed and written up in a similar way to a main trial, with inappropriate emphasis on p-values, and for the sort of studies to which our framework ascribes the terms feasibility and pilot. In 2018, the term was debated alongside ‘pilot’ and ‘feasibility’ studies at a consensus workshop run by the GUEST (GUidance for Exploratory STudies of complex public health interventions) group, whose research was funded by the MRC. The term ‘exploratory study’ was subsequently not included in the updated MRC recommendations for complex interventions [19, 20]. We have therefore not referred to it specifically in the remainder of this article.

The aim of this article is to provide an update on our previously published framework that incorporates the current debates taking place in the pilot and feasibility landscape. The objectives are to address three particular aspects of the current debate: (i) how the aspects of uncertainty about feasibility that investigators aim to understand influence design decisions; (ii) the interdependency of theoretically informed pilot and feasibility study aims, the external funding environment and the policy agenda; and (iii) how internal pilot studies fit into the framework.

Expanding the 2016 conceptual framework for pilot and feasibility studies

The principles on which the 2016 conceptual framework was developed are summarised in the next section, followed by a proposal to update it to include internal pilot studies. During development of the CONSORT extension for pilot and feasibility trials [10], we went through a systematic consensus exercise to agree definitions that encompassed any feasibility or pilot work. A fundamental principle, extrapolating from standard dictionary definitions [1], was that feasibility is an overarching concept covering the exploration of any aspect of an intervention or of trial design for which more information is required before progressing. We termed this need for more information the ‘uncertainty’ (see Table 2).

Table 2 Definitions for pilot and feasibility studies as articulated in our conceptual framework [1]

We concluded that, at an early stage of developing an intervention where there is maximum uncertainty, a range of different methodological approaches could be used, appropriate to the specific nature of the uncertainty. This mirrored the approach advocated in the MRC Framework for developing and evaluating a complex intervention at that time [8]. A feasibility study could be a qualitative exploration of stakeholders’ views on the acceptability of, or specification for, a proposed new service, or an epidemiological study confirming or refuting the need for the proposed service [21]. Examples of non-randomised feasibility studies are given in Lancaster and Thabane’s editorial [22]. Systematic reviews of subject-specific topics can also be an important methodology to include at this early stage [23]. Where there is less uncertainty, a pilot study might be conducted in which all or part of the proposed intervention, or another process to be undertaken as part of a definitive trial, is evaluated [24]. As uncertainties are resolved, a non-randomised before-and-after study might be conducted for the intervention arm only [25]. When most uncertainty has been resolved, a randomised pilot trial (external or internal to the main trial) might be more appropriate. An external pilot is likely to include either all or part of the definitive RCT, but on a smaller scale; by definition, it includes the randomisation process [26], but it might explore alternative recruitment and randomisation approaches. This often represents the end of the feasibility pathway, but the pathway is not necessarily, and indeed often is not, linear, a fact acknowledged explicitly in our framework.

The framework (shown below in Fig. 1) includes in the innermost circle the main or definitive trial, and the three categories of external feasibility studies (randomised pilot, non-randomised pilot and other feasibility) are represented in the blue concentric circle. The two-directional arrows underpin the basic principle that the process of confirming feasibility is iterative because, paraphrasing the words of a famous American, as well as known unknowns there are unknown unknowns [27]. These unknown unknowns may only emerge once preliminary empirical work has been undertaken and may require any of the other feasibility approaches to resolve, sometimes challenging earlier planned timelines.

Fig. 1 Conceptual diagram illustrating the relationship between different types of feasibility studies, adapted from PLOS One

Comparing internal and external pilot studies

In our original model, we focussed on feasibility studies that are external to the main definitive study, represented by the innermost green circle. Whilst for completeness we did include internal pilot studies in our diagrammatic model, we did not consider them any further. However, as noted in the introduction, there is an increasing trend to incorporate an internal pilot study within the main definitive trial. Usually, such internal pilot studies are conducted when most feasibility issues have been resolved, and the remaining uncertainties are generally focussed only on recruitment and randomisation. Indeed, the authors of a recently published systematic review of publicly funded trials have recommended that internal pilot studies should be used when evidence is needed for estimating recruitment, randomisation and attrition rates [28]. It is likely that this point would be reached only after preliminary feasibility work, which may have included an external pilot trial. It is important to note, however, that there is no black and white rule for when an external or internal pilot study should be used; both have their place.

As mentioned above, an internal pilot may be the preferred option if the only remaining uncertainties concern recruitment, randomisation and attrition. Indeed, whether or not investigators undertake a formal internal pilot, it would be unusual for these aspects not to be monitored (and sometimes to lead to modifications) in a definitive trial. However, if other uncertainties remain, particularly if they are considerable, an external pilot may be called for. Recruitment, randomisation and attrition could also be explored within an external pilot. However, the rates achieved in a pilot study may not always be replicated in a subsequent definitive trial, as illustrated by a study of pharmacist prescribing in nursing homes: although it built on a successful non-randomised pilot study [25], recruitment of sites was much more challenging in the main definitive trial (unpublished data).

If the proposed definitive trial includes a novel aspect, this is likely to be a strong reason for undertaking an external pilot study, for example, if a new unit of randomisation, a new trial design or a new clinical setting is being proposed. As a result of one of the pilot studies in Table 3 (FEMUR [29]), the definitive trial was deemed not feasible, and for another (UK-BEAM [30]), there was a decision to change the main trial design by abandoning practice-level randomisation. Such re-design would have been difficult to implement following an internal pilot. In general, if a study is recruiting in a new context, in an under-researched speciality or a vulnerable or under-researched group, or uses an innovative design, then feasibility might be explored best in an external pilot trial. Conversely, whilst the COMQUOL [31] pilot study showed that a definitive trial was feasible, the subsequent trial was never undertaken because funding could not be secured. Nevertheless, it provides evidence for the feasibility of trials in secure mental health wards. It is an example of how a well-conducted external pilot or feasibility study can provide valuable information to the wider research community, even if not of direct value to the researchers themselves. Finally, the STarT MSK [32] trial started as an internal pilot study but became a de facto external pilot when extensive changes had to be made.

Table 3 Examples of useful external pilot studies

Implications for funders, policy makers and researchers

As trial methodology has evolved, the importance of recruiting to target, underpinned by a justified power calculation, has become paramount. McDonald et al. [33] reported that in a cohort of 114 trials funded by the UK MRC and HTA between 1994 and 2002, less than a third achieved their original recruitment target and half were awarded an extension. Disappointingly, despite methodological developments, little improvement was reported for studies funded between 2004 and 2016 [34]: only 50% achieved their original recruitment target and a third extended their recruitment. Whilst in some cases trials have continued with revised recruitment targets, other trials have been closed down prematurely by funders when it became clear during the trial that target sample sizes would not be achieved. It is not surprising, therefore, that conducting extensive feasibility work prior to a full trial has more recently become a prerequisite for securing substantive research funding from most grant-giving bodies. The average budget for a full trial, based on recently funded NIHR trials, has been estimated as just under £1.2m (range £321,403–£2,099,813) [17]. It is therefore in the funder’s interest to award grants only to those trials with evidence from feasibility studies that they are likely to succeed; in other words, that they have demonstrated they can recruit and retain participants and implement the intervention successfully. The issue of waste at all stages of the research pathway was highlighted some years ago in a seminal Lancet series of five papers published in 2014, one of which focussed on the importance of considering what sort of research to fund [35] and another on more efficient regulation and funding of research [36]. However, despite this, there is little evidence that funders in general consider these issues in their decision making [37]. The NIHR is cited as a funding body that does have a more transparent approach to funding decisions; a recent study examining outcomes of feasibility studies funded through its Research for Patient Benefit schemes concluded that these studies can potentially avoid waste and ‘de-risk funding investments of more expensive full trials’ [17].
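To illustrate what a ‘justified power calculation’ implies for the recruitment target that funders scrutinise, the minimal sketch below performs a standard two-arm sample size calculation under the normal approximation. The effect size, standard deviation, significance level and power are purely hypothetical and are not drawn from any of the trials discussed here.

```python
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.9):
    """Approximate participants per arm needed to detect a mean difference
    `delta` on an outcome with standard deviation `sd` (two-sided test,
    normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # critical value for the desired power
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Hypothetical target: detect a 5-point difference on an outcome with SD 15,
# with 90% power at the 5% significance level -> roughly 190 per arm,
# before any allowance for attrition.
print(round(n_per_arm(delta=5, sd=15)))
```

Recruitment and retention rates demonstrated in feasibility work can then be judged against targets of this order before a definitive trial is funded.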

Implications of extensive preliminary work

The consequence of the need for extensive preliminary work is an extended timeline from conceptualisation of an idea to completion of the definitive randomised controlled trial. This is a major challenge for funders and researchers, and the increased time delay introduces a different sort of waste. It is estimated that the total time from initial work to completion of the definitive RCT is about 8 years [17]. Indeed, even after a successful pilot, the main trial may never happen if a further grant application needs to be written and funding secured; in the interim, priorities can change and different selection panels or funders may be involved. Further, policy makers needing evidence to inform service redesign or clinical decision making face the dilemma of delaying their decision, making a decision in the absence of the best evidence, and/or commissioning a post-implementation evaluation. Some short-circuiting of the developmental process is therefore desirable and can sometimes be achieved by securing programme funding, or equivalent, as is possible in some countries such as the UK [38], USA [39] and Canada [40]. This longer-term award can enable seamless progression through a series of studies along the feasibility pathway, culminating in a definitive trial with embedded progression criteria and stopping points. However, securing such large programme grants is challenging, often requires a two- or three-stage application process, and in practice is only likely to be awarded to senior investigators with a strong track record. It is not an option for less experienced research teams, and all researchers, whatever their level of experience, are constrained by competitive funding systems and limited funding budgets.

Ultimately, pragmatism may often over-ride methodological reasoning and scientific principles. Study design and approaches to securing funding may relate to the seniority of research leads, as well as being based on how to explore specific uncertainties. Criteria for successful funding often include a request for reviewers to comment on the track record of the applicants, especially the lead applicant. A junior researcher seeking a first opportunity to become a principal investigator would not apply for a programme grant but might reasonably be able to secure funding for a smaller pilot study, designed as part of a programme of work leading to a definitive trial. Indeed, many funding bodies with limited budgets explicitly prioritise pump-priming projects such as pilot studies, including external pilot trials. Whilst these may not be definitive in their own right, they have the potential to seed bigger subsequent grants. If money is available immediately for a definitive trial, perhaps through a commissioned call, then the balance may swing in favour of an internal pilot, with the benefit of reducing the timeline and allowing some ongoing uncertainties to be resolved as the main trial proceeds. But there are times when care needs to be taken. Whilst challenges in recruitment can often be compensated for by extending to other centres, other criteria, for example those based on safety, which might require further training of the service providers, could necessitate a fundamental change in the intervention and could invalidate inclusion of the internal pilot participants. The risk of such events needs to be recognised. The flow chart below (Fig. 2) illustrates in simplified form the options for study design. Given the preceding discussion, it is acknowledged that at all points in the decision-making process the options for funding may also be a consideration. As the flow chart suggests, internal and external pilot trials are not a dichotomy. There is considerable overlap in their objectives and the uncertainties they address, reinforcing that the choice is often pragmatic, as mentioned above, and not always based on methodological requirements.

Fig. 2 Simplified flow chart incorporating both internal and external pilot studies

Discussion

It is noted that many of the approximately 3000 papers citing our previous work [1, 13, 14] do so as justification for doing a feasibility study. If we have achieved a paradigm shift in research culture, this suggests that one of the aims we originally had for undertaking this work has been achieved. Publication has also been facilitated by the launch of the journal Pilot and Feasibility Studies, dedicated to publishing early developmental work but emphasising that this must be true feasibility work, not a small, underpowered, non-generalisable study. However, disappointingly, we know that the latter do still exist.

There is now broad consensus that early work to identify the optimal approach to recruiting, and to confirm the size of the eligible population, is paramount. Additionally, the increasing introduction of complex interventions in service delivery has driven the need to understand the contents of the intervention—the black box—and to ensure as far as possible that all the components are delivered in an optimum way to maximise the chance of a successful trial outcome. In other words, it is important to ensure that a good idea is not rejected because of a failure of one part of the system which should have been identified and addressed before progressing to a definitive study. All of the above increasingly validate the systematic development approach described in all versions of the MRC Framework for developing and evaluating a complex intervention [18]. Now in its fourth version, its focus is on identifying what is uncertain and conducting work to remove that uncertainty. All versions of the MRC framework have included details on the need for feasibility work, although terminology and emphasis have subtly changed. Our conceptual framework sits well within this paradigm.

We have made the case that both external and internal pilot trials are part of an armamentarium of research approaches that can be used to address uncertainty. There is no absolute rule about when either should be adopted, and we have emphasised the importance of balancing the ideal of eliminating all uncertainties against the pragmatic need for efficiency and value for money. With that in mind, a further area of enquiry might be whether an external pilot trial can become an internal pilot if nothing has been changed, as shown in the flow chart in Fig. 2. The implication would be that data collected as part of the external pilot trial would be included in the definitive trial data set. Issues that would then need to be considered include whether there have been any external contextual changes in the time lapse between completion of the pilot trial and the start of the definitive trial, and the effect these might have preferentially on either arm. All of these options would need to be prespecified in the protocol and, in the interests of transparency, also included in other documentation such as participant information and consent. In the longer term, clear guidance on all of this is required.

There are other aspects of the optimal design and conduct of feasibility studies which have not been discussed in this paper but are mentioned here as examples of further work required to inform methodological guidance. In line with the CONSORT extension, studies should have progression criteria in place (Checklist item 22a), yet recent work by our group suggests that just under a fifth of pilot study protocols include progression criteria [41]. Progression criteria should ideally be facilitative and used to inform successful trial completion, not serve as a tool to stop a trial. They may be particularly challenging for an internal pilot trial, where the potential for further change is limited. If recruitment is slower than expected, what is the extent of change that can be made whilst retaining the internal pilot data? If intervention fidelity is poor, can any changes be made, for example more training provided, and differences accounted for in analyses? If recruitment is slow, can eligibility criteria be relaxed, the timeline extended or the number of sites increased? Can secondary outcomes be changed or reduced, or data collected differently or at a different time interval in order to improve response, as long as the primary outcome is unchanged?
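As one way of making progression criteria facilitative rather than purely stop/go, they are often framed as a traffic-light scheme. The sketch below encodes a hypothetical scheme of this kind; the metrics and thresholds are invented for illustration and are not drawn from the CONSORT extension or from any study cited above.

```python
# Hypothetical traffic-light progression criteria for an internal or external
# pilot: green = proceed, amber = proceed with modifications, red = review/stop.
CRITERIA = {
    "recruitment_rate_per_site_month": {"green": 2.0, "amber": 1.0},
    "retention_proportion":            {"green": 0.80, "amber": 0.65},
    "intervention_fidelity":           {"green": 0.90, "amber": 0.75},
}

def assess(observed):
    """Classify each observed pilot metric against its pre-specified thresholds."""
    ratings = {}
    for metric, value in observed.items():
        green, amber = CRITERIA[metric]["green"], CRITERIA[metric]["amber"]
        ratings[metric] = "green" if value >= green else "amber" if value >= amber else "red"
    return ratings

print(assess({"recruitment_rate_per_site_month": 1.4,
              "retention_proportion": 0.82,
              "intervention_fidelity": 0.70}))
# {'recruitment_rate_per_site_month': 'amber', 'retention_proportion': 'green',
#  'intervention_fidelity': 'red'}
```

An amber result would prompt pre-specified remedial action, such as adding sites or providing extra training, rather than automatic termination.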

There is also much current debate on the correct basis for determining the pilot study sample size, with consensus that there will rarely be a single right answer. However, whatever sample size is chosen, and however it is chosen by balancing the competing factors, it must be scientifically justified; if the study is not designed with an appropriate sample size, its findings are unlikely to be valid. Lewis et al. have begun to address sample size for process outcomes to inform progression criteria in pilot and feasibility studies [42].
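One common way of justifying a pilot sample size is to show the precision it buys around a key feasibility estimate. The minimal sketch below, using a hypothetical expected retention proportion and candidate sample sizes, illustrates this precision-based reasoning; it is not the hypothesis-testing approach of Lewis et al. [42].

```python
from math import sqrt
from scipy.stats import norm

def ci_halfwidth(p, n, alpha=0.05):
    """Approximate 95% confidence interval half-width for an observed
    proportion p (e.g. retention) in a pilot of size n (normal approximation)."""
    return norm.ppf(1 - alpha / 2) * sqrt(p * (1 - p) / n)

# Hypothetical: expected retention of 0.8; how precise is the estimate
# for different candidate pilot sample sizes?
for n in (30, 50, 100):
    print(n, round(ci_halfwidth(0.8, n), 3))
# prints half-widths of about 0.143, 0.111 and 0.078 respectively
```

Researchers can then choose the smallest pilot whose interval is narrow enough to inform the progression decision, balanced against cost and timeline.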

Finally, there is the ethical question of the conditions of the participant’s informed consent. In the strictest terms of transparency under GDPR, participants consenting to take part in a randomised pilot trial should be clearly informed how their data will be used. A review of 184 studies submitted to a Canadian research ethics committee suggests that the transparency of informed consent in pilot and feasibility studies (PAFS) is inadequate and needs to be specifically addressed by research ethics guidelines [43]. This is not just about having the words pilot or feasibility in the title. For an external pilot trial, it should be explicit that the findings of the study will only be used as part of the research development process and not as part of a data set used to provide definitive evidence. The implications of this transparency, however, have not yet been fully debated. If the purpose of the pilot trial is to (say) assess recruitment rates, how valid is the pilot with respect to the main trial if people are told there is a different purpose to the research? In the interests of research and the public good, is it justified to deceive? What should participants recruited to an internal pilot trial be told? Our early work suggests that just under a fifth of studies declare their pilot or feasibility objectives in the participant information sheet [43]. Changing an external pilot to an internal pilot—one of the options we speculate on above—might also have implications for informed participant consent, GDPR and the way the data are used. This is another area requiring exploration.

Conclusion

In conclusion, in this paper we have revisited our rationale for requiring that pilot and feasibility studies should be clearly defined and recognised as study designs in their own right. We have now fully integrated internal and external pilot studies into our model and considered their continuous, as opposed to dichotomous, relationship. In this final discussion, we have raised awareness of issues common to both internal and external studies which are themselves current uncertainties in the context of optimum research design. Resolving all of these would contribute to the delivery of research which is more ethical, rigorous, efficient and, above all, robust in its findings.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

References

  1. Eldridge SM, Lancaster G, Campbell M, Thabane L, Hopewell S, Coleman C, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: using consensus methods and validation to develop a conceptual framework. PLoS One. 2016;11(3):e0150205. https://doi.org/10.1371/journal.pone.0150205.

  2. National Institute for Health Research. Guidance on applying for feasibility studies 2021. Updated 01/02/21. Available from: https://www.nihr.ac.uk/documents/nihr-research-for-patient-benefit-rfpb-programme-guidance-on-applying-for-feasibility-studies/20474 Accessed 18/05/21

  3. Health Research Board. Definitive Interventions and Feasibility Awards (DIFA) Guidance Notes 2020 Updated 25/05/20. Available from: https://www.hrb.ie/fileadmin/2._Plugin_related_files/Funding_schemes/DIFA_2020_Full_Application_Guidance_Notes.pdf Accessed 01/10/2020

  4. Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10(2):307–12.

  5. Thabane L, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10:1.

  6. Arain M, et al. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10:67.

  7. National Institute for Health Research. NIHR Evaluation, Trials, and Studies |Glossary 2015 Available http://www.nets.nihr.ac.uk/glossary/feasibility-studies. Accessed 17 Mar 2015

  8. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. https://doi.org/10.1136/bmj.a1655.

  9. Campbell MJ, Lancaster GA, Eldridge SM. A randomised controlled trial is not a pilot trial simply because it uses a surrogate endpoint. Pilot Feasib Stud. 2018;4:130. https://doi.org/10.1186/s40814-018-0324-2.

  10. Thabane L, Hopewell S, Lancaster GA, Bond CM, Coleman CL, Campbell MJ, et al. Methods and processes for development of a CONSORT extension for reporting pilot randomized controlled trials. Pilot Feasibility Stud. 2016;2:25. https://doi.org/10.1186/s40814-016-0065-z.

  11. NIHR Glossary. Glossary | NIHR. Accessed 12 Jan 2022.

  12. Pilot and Feasibility Studies | Home page (biomedcentral.com). Accessed 12 Jan 2022.

  13. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. BMJ. 2016;355:i5239.

  14. Eldridge SM, Chan CL, Campbell MJ, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2:64. https://doi.org/10.1186/s40814-016-0105-8.

  15. Moore L, Hallingberg B, Wight D, et al. Exploratory studies to inform full-scale evaluations of complex public health interventions: the need for guidance. J Epidemiol Community Health. 2018;72:865–6.

  16. Avery KNL, Williamson PR, Gamble C, members of the Internal Pilot Trials Workshop supported by the Hubs for Trials Methodology Research, et al. Informing efficient randomised controlled trials: exploration of challenges in developing progression criteria for internal pilot studies. BMJ Open. 2017;7:e013537. https://doi.org/10.1136/bmjopen-2016-013537.

  17. Morgan B, Hejdenberg J, Hinrichs-Krapels S, Armstrong D. Do feasibility studies contribute to, or avoid, waste in research? PLoS One. 2018;13(4):e0195951. https://doi.org/10.1371/journal.pone.0195951 Published 2018 Apr 23.

  18. Hallingberg B, Turley R, Segrott J, et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud. 2018;4:104. https://doi.org/10.1186/s40814-018-0290-8.

  19. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. Framework for the development and evaluation of complex interventions: gap analysis, workshop and consultation-informed update. Health Technol Assess. 2021;25(57).

  20. Skivington K, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374:n2061.

  21. Desplenter F, Bond CM, Watson M, Burton C, Murchie P, Lee AJ, et al. Incidence and drug treatment of emotional distress after cancer diagnosis: a matched primary care case-control study. Br J Cancer. 2012;107(9):1644–51. https://doi.org/10.1038/bjc.2012.364. Epub 2012 Oct 11.

  22. Lancaster GA, Thabane L. A guide to the reporting of non-randomised pilot and feasibility studies. Pilot Feasib Stud. 2019;5:114. https://doi.org/10.1186/s40814-019-0499-1.

  23. Lund H, Brunnhuber K, Juhl C, Robinson K, Leenaars M, Dorch BF, et al. Towards evidence based research. BMJ. 2016;355:i5440. https://doi.org/10.1136/bmj.i5440.

  24. Moore SA, Avery L, Price CIM, Flynn D. A feasibility, acceptability and fidelity study of a multifaceted behaviour change intervention targeting free-living physical activity and sedentary behaviour in community dwelling adult stroke survivors. Pilot Feasib Stud. 2020;6:58.

  25. Inch J, Notman F, Bond C, Alldred D, Arthur A, Blyth A, et al. The Care Home Independent Prescribing Pharmacist Study (CHIPPS)-a non-randomised feasibility study of independent pharmacist prescribing in care homes. Pilot Feasib Stud. 2019;5:89. https://doi.org/10.1186/s40814-019-0465-y#citeas.

  26. O’Dowd H, Beasant L, Ingram J, Montgomery A, Hollingworth W, Gaunt D, et al. The feasibility and acceptability of an early intervention in primary care to prevent chronic fatigue syndrome (CFS) in adults: randomised controlled trial. Pilot Feasib Stud. 2020;6:65.

  27. Donald Rumsfeld - There are known knowns. These are things… (brainyquote.com). Accessed 12 Jan 2022.

  28. Cooper CL, Whitehead A, Pottrill E, Julious SA, Walters SJ. Are pilot trials useful for predicting randomisation and attrition rates in definitive studies? A review of publicly funded trials. Clin Trials. 2018;15(2):189–96.

  29. Williams NH, Hawkes C, Din NU, et al. Fracture in the Elderly Multidisciplinary Rehabilitation (FEMuR): study protocol for a phase II randomised feasibility study of a multidisciplinary rehabilitation package following hip fracture [ISRCTN22464643]. Pilot Feasibility Stud. 2015;1:13. https://doi.org/10.1186/s40814-015-0008-0.

  30. Farrin A, Russell I, Torgerson D, Underwood M. Differential recruitment in a cluster randomized trial in primary care: the experience of the UK Back pain, Exercise, Active management and Manipulation (UK BEAM) feasibility study. Clin Trials. 2005;2(2):119–24. https://doi.org/10.1191/1740774505cn073oa.

  31. MacInnes D, Kinane C, Beer D, et al. Study to assess the effect of a structured communication approach on quality of life in secure mental health settings (Comquol): study protocol for a pilot cluster randomized trial. Trials. 2013;14:257. https://doi.org/10.1186/1745-6215-14-257.

  32. Hill JC, Garvin S, Chen Y, et al. Stratified primary care versus non-stratified care for musculoskeletal pain: findings from the STarT MSK feasibility and pilot cluster randomized controlled trial. BMC Fam Pract. 2020;21:30. https://doi.org/10.1186/s12875-019-1074-9.

  33. McDonald AM, Knight RC, Campbell MK, Entwistle VA, Grant AM, Cook JA, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials. 2006;7:9. https://doi.org/10.1186/1745-6215-7-9 PMID: 16603070; PMCID: PMC1475627.

  34. Walters SJ, dos Anjos B, Henriques-Cadby I, Bortolami O, et al. Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme. BMJ Open. 2017;7:e015276. https://doi.org/10.1136/bmjopen-2016-015276.

  35. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Jonathan Grant A, Gülmezoglu M, et al. Research: increasing value, reducing waste 1 how to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–65.

  36. Salman RA-S, Beller E, Kagan J, Hemminki E, Phillips RS, Savulescu J, et al. Research: increasing value, reducing waste 3 increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383:176–85.

  37. Nasser M, Clarke M, Chalmers I, Brurberg KG, Nykvist H, Lund H, et al. What are funders doing to minimise waste in research? Lancet. 2017;389(10073):1006–7. https://doi.org/10.1016/S0140-6736(17)30657-8 PMID: 28290987.

  38. Programme Grants for Applied Research | NIHR Accessed 3 Mar 2022

  39. Types of Grant Programs | grants.nih.gov Accessed 3 Mar 2022

  40. Canadian Institutes of Health Research https://cihr-irsc.gc.ca/e/193.html Accessed 3 Mar 2022

  41. Mellor K, Eddy S, Peckham N, et al. Progression from external pilot to definitive randomised controlled trial: a methodological review of progression criteria reporting. BMJ Open. 2021;11(6):e048178. https://doi.org/10.1136/bmjopen-2020-048178.

  42. Lewis M, Bromley K, Sutton CJ, et al. Determining sample size for progression criteria for pragmatic pilot RCTs: the hypothesis test strikes back! Pilot Feasibility Stud. 2021;7:40. https://doi.org/10.1186/s40814-021-00770-x.

  43. Khan MI, Mbuagbaw L, Holek M, et al. Transparency of informed consent in pilot and feasibility studies is inadequate: a single-center quality assurance study. Pilot Feasibility Stud. 2021;7:96. https://doi.org/10.1186/s40814-021-00828-w.

Acknowledgements

Not applicable.

Funding

No specific funding was received for this work.

Author information

Authors and Affiliations

Authors

Contributions

CB, GL, SE, LT, MC, SH and CC conceived the idea for this paper. CB led the writing of the paper supported by GL and SE, and all authors contributed to the drafting of the paper and iteratively revising it. All authors have approved the submitted version and agreed both to be personally accountable for the author’s own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved and the resolution documented in the literature.

Authors’ information

CB, CC, SE, SH, GL, MC and LT are the authors of the original conceptual paper and subsequent CONSORT extension [10, 13, 14].

Corresponding author

Correspondence to Christine Bond.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

GL and LT are editors in chief, CC is an associate editor and CB, MC and SE are on the Editorial Board of Pilot and Feasibility Studies. Other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Bond, C., Lancaster, G.A., Campbell, M. et al. Pilot and feasibility studies: extending the conceptual framework. Pilot Feasibility Stud 9, 24 (2023). https://doi.org/10.1186/s40814-023-01233-1
