

Methods and processes for development of a CONSORT extension for reporting pilot randomized controlled trials


The Erratum to this article has been published in Pilot and Feasibility Studies 2016 2:35



Feasibility and pilot studies are essential components of planning or preparing for a larger randomized controlled trial (RCT). They are intended to provide useful information about the feasibility of the main RCT—with the goal of reducing uncertainty and thereby increasing the chance of successfully conducting the main RCT. However, research has shown that there are serious inadequacies in the reporting of pilot and feasibility studies. Reasons for this include a lack of explicit publication policies for pilot and feasibility studies in many journals, unclear definitions of what constitutes a pilot or feasibility RCT/study, and a lack of clarity in the objectives and methodological focus. All these suggest that there is an urgent need for new guidelines for reporting pilot and feasibility studies.


The aim of this paper is to describe the methods and processes in our development of an extension to the Consolidated Standards of Reporting Trials (CONSORT) Statement for reporting pilot and feasibility RCTs that are conducted in preparation for a future, more definitive RCT.


There were five overlapping parts to the project: (i) the project launch—which involved establishing a working group and conducting a review of the literature; (ii) stakeholder engagement—which entailed consultation with the CONSORT group, journal editors and publishers, the clinical trials community, and funders; (iii) a Delphi process—used to assess the agreement of experts on initial definitions and to generate a reporting checklist for pilot RCTs, based on the 2010 CONSORT statement, adapted as applicable to reporting pilot studies; (iv) a consensus meeting—to discuss, add, remove, or modify checklist items, with input from experts in the field; and (v) write-up and implementation—which included a guideline document giving an explanation and elaboration (E&E), with advice and examples of good reporting practice for each item. This final part also included a plan for dissemination and publication of the guideline.


We anticipate that implementation of our guideline will improve the reporting completeness, transparency, and quality of pilot RCTs, and hence benefit several constituencies, including authors of journal manuscripts, funding agencies, educators, researchers, and end-users.


Feasibility and pilot studies are essential components of planning or preparing for a larger randomized controlled trial (RCT). They are intended to provide useful information about the feasibility of running the main RCT [1, 2] with the goal of reducing uncertainty and thereby increasing the chance of successfully conducting the main RCT. They are also useful preliminary studies that other researchers can learn from when developing their own study designs to enhance their approach or avoid similar pitfalls. However, many journals do not have a specific publication policy for these types of study or consider them a priority, and it has been shown that there are serious inadequacies in how pilot and feasibility studies are reported [1–6]. First, the dearth of published research describing pilot/feasibility studies suggests that only a minority of them actually reach publication; additionally, when they are published, a wide variety of terms are used to describe them. Second, only a small percentage of this minority of published pilot and feasibility studies explicitly state that they are intended as preparation for a larger RCT. Third, in many instances, the objectives of pilot and feasibility studies are unclear or are mis-specified as being the same as in the main RCT. Lastly, methodological features of pilot and feasibility studies are often inappropriately reported in the same format as in the main RCT. Reasons for these deficiencies include not only the lack of explicit publication policies for pilot and feasibility studies in many journals [3, 4] but also unclear definitions of what actually constitutes a pilot or feasibility RCT/study [2, 6] and confusion about what the objectives and methodological focus ought to be in such studies [1–3]. All these suggest that there is an urgent need for the development of new guidelines for reporting of pilot and feasibility studies.
Given the importance of these studies in preparing for future definitive trials, we anticipate that the guideline will help to address the prevailing flaws in their conduct and reporting, leading to higher-quality pilot trials and enhanced feasibility of the main RCTs.

Initial discussions on developing reporting guidelines for feasibility and pilot studies occurred at the annual scientific meeting of the Society for Academic Primary Care (SAPC) held in Bristol (UK) on June 6–8, 2011, during a workshop on “Pilot and feasibility studies: How best to obtain pre-trial information and publish it”, organized and led by Sandra Eldridge, Gillian Lancaster, and Sally Kerry and attended by Christine Bond. The workshop was intended to clarify the aims of pilot and feasibility studies, improve understanding of the particular requirements of these studies (including specification of their key objectives), and discuss how to report them appropriately. It was proposed (after the workshop) that reporting guidelines for feasibility and pilot studies would be helpful as a template for researchers, reviewers, and editors to use when preparing or reviewing papers for publication; they would also provide guidance to funders and policy-makers who review grant or funding proposals for feasibility and pilot studies.

Arising from this workshop, SE, GL, and CB engaged other collaborators in the area (MJC, LT, SH), and the group embarked together on a programme of research focusing on the reporting of feasibility and pilot studies. This paper reports part of that work: the methods and processes used in the development of a consolidated standards of reporting trials (CONSORT) extension guideline for reporting randomized pilot and feasibility studies. The guideline focuses on reporting of pilot and feasibility RCTs, using the 2010 CONSORT Statement [7] as the starting point. The original CONSORT guideline aimed to improve the reporting of two-arm parallel group RCTs [8] and was later extended to cover other types of designs: cluster randomized trials [9], non-inferiority and equivalence trials [10], pragmatic trials [11], and N-of-1 trials [12]. A variety of clinical areas has also been discussed, including the following interventions: herbal medicinal [13], non-pharmacological [14], and acupuncture [15]. Finally, related types of data have been described, including the following: patient-reported outcomes [16], harms [17], abstracts [18], and RCT protocols [19].


The aim of this paper is to describe the methods and processes for development of our CONSORT extension for reporting feasibility and pilot RCTs that are executed in preparation for a future definitive large-scale RCT. Reporting guidance for other types of pilot and feasibility studies—which include non-randomized and qualitative pilot and feasibility studies—will be a focus of future work.

Methods and processes

We followed previously recommended methods and processes for developing, disseminating, and implementing consensus reporting guidelines [20–22]. Briefly, these included a series of activities: (1) project launch—which included establishing the working group, identifying the need for the guideline, performing a literature review of current practice, drafting an initial list of items, and starting to seek funding support for the project; (2) engagement with stakeholders—which included identifying potential participants for a Delphi study and face-to-face consensus meeting, and early presentations of potential items to gain feedback at conferences and workshops; (3) conducting the Delphi study—including a pilot Delphi, set-up of the questions online, and presentation of the results at a methodology meeting of trialists; (4) a consensus meeting—to present the results of the literature review and the Delphi study and to discuss the revised list of checklist items; (5) write-up and implementation—including creating the guideline, addressing feedback from users, establishing the explanation and elaboration (E&E) document, and devising a publication strategy; and (6) post-publication activities—which cover encouraging guideline uptake and endorsement, updating the guideline, and evaluating impact. These methods, or variations and adaptations of them, have been used in the development of other similar guidelines [7, 9–20]. Figure 1 illustrates the five parts of the development process for this guideline: (i) the project launch, (ii) stakeholder engagement, (iii) a modified Delphi process, (iv) a consensus meeting, and (v) write-up and implementation. It does not yet include post-publication activities, where we will engage in dissemination and endorsement, as we have not yet reached this stage. The development process is described below. It should be noted, however, that the process was iterative—repeating some part(s) as necessary—rather than linear.

Fig. 1

Development of the CONSORT extension to pilot trials guideline

Part 1: project launch

Establishing a working group

After the SAPC meeting in June 2011, our (SE, GL, CB) first step was to establish a working group to lead the process of developing the reporting guidelines. The core makeup of the group included people with experience in conducting and publishing methodological work on pilot and feasibility RCTs (CB, SE, GL, LT, MC) and methodologists and statisticians with expertise in the design and reporting of RCTs and in reviewing funding or ethics applications (SE, GL, LT, MC). A member of the CONSORT group (SH) was invited and agreed to join the group during the Delphi process. The group communicated regularly throughout the process via a number of face-to-face and virtual meetings (by teleconference or Skype) and email discussions. CC joined the group when she was appointed to a National Institute for Health Research (NIHR) research methods fellowship focusing on pilot studies supervised by SE in 2013.

Review of the literature

We first reviewed the literature to assess the quality of pilot and feasibility studies that had been published in major medical journals (Lancet, the BMJ, the New England Journal of Medicine, JAMA). These are amongst the leading medical journals that have been included in previous systematic reviews or surveys of the reporting of pilot studies [1, 3]. We included articles identified in the searches by Lancaster et al. [1] and Arain et al. [3], as well as examples used in previous workshops conducted by some of the working group members. We assessed whether clear statements had been made with respect to the following: whether the article concerned a pilot or feasibility study; feasibility objectives; and whether they stated that the feasibility or pilot study was in preparation for a larger RCT. We also reviewed existing definitions of and reporting guidelines for pilot and feasibility studies [1, 2, 23].

Part 2: stakeholder engagement

We engaged several groups of stakeholders in the process:

  • The CONSORT group: As noted earlier, the guideline was developed with involvement of the CONSORT group. As with other CONSORT-related guidelines, the inclusion of a CONSORT Group member (SH) was intended to ensure consistency in the use of recommended methods in the development, dissemination, and implementation of high quality reporting guidelines [24].

  • Clinical Trials Community: Our first engagement of the clinical trials community was at the Annual Meeting of the Society for Clinical Trials in Boston in May 2012, where we presented early work from the Delphi study (see later). This was organized as an invited session at the meeting, and there were about 40 attendees. The presentation focused on problems in the reporting of pilot and feasibility studies and the need to develop guidance to improve the situation [25].

A second opportunity to engage a larger clinical trials community was at the 2nd Clinical Trials Methodology Conference in November 2013 in Edinburgh, Scotland. This was an open session, and the discussion focused mainly on the definitions of pilot and feasibility studies (see “Part 3: the Delphi process” section later).

Over the course of the project, we have also delivered a number of workshops and talks on feasibility and pilot studies and sought feedback from the research community. Overall, the reactions have supported the idea of developing CONSORT-type reporting guidelines for feasibility and pilot studies. We recognize that there are differences of opinion about the definitions of these studies, particularly as they reflect various user groups and theoretical perspectives, as we report elsewhere [26].

  • Journal Editors and Publishers: We engaged editors of prominent journals known to publish pilot and feasibility studies, including BMJ Open, Journal of Clinical Epidemiology, Clinical Trials, and BMC Trials. The selection of the journal editors was pragmatic: (i) our knowledge of publication of pilot and feasibility studies led us to believe that these journal editors would be interested in the work; (ii) the working group members were already serving on the editorial boards of some of these journals and sent personal invitations to the editors; and (iii) these editors were available to attend the consensus meeting. We also engaged several publishers, including BioMed Central and the BMJ. We will continue to engage other journal editors and publishers to ensure wider awareness and endorsement of our guideline upon its completion. It has been shown that formal endorsement of a guideline by journals is a strong determinant of its adoption and subsequent adherence to it [27].

  • Funders: We approached several funding agencies, including the Medical Research Council UK, the Canadian Institutes of Health Research, and the Chief Scientist Office, Scottish Government, to engage them in providing financial support for the development of the guideline. These are amongst the major agencies that have financially supported pilot studies through national or international funding competitions. Our own project was subsequently funded in part by Queen Mary University of London, the University of Sheffield, the Chief Scientist Office in Scotland, the NIHR Research Design Services London and South East, and the NIHR Statisticians Network.

The project was registered on the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network website [28].

Part 3: the Delphi process

The Delphi process is iterative and provides a structured collection of input, information, and feedback from participants by using a series of survey questionnaires. Typically, each questionnaire is refined based on comments from previous iterations [29]. Generally considered to be one of the central features in developing reporting guidelines [21, 22], the Delphi method has been widely used in this way [7, 9–20]. The objectives of our Delphi process were (a) to evaluate the agreement of the various participants (approximately 100 stakeholders in total, including trialists, methodologists, and statisticians) with respect to our initial definitions of feasibility and pilot studies; (b) to assess their agreement on two reporting checklists, one for pilot studies and the other for feasibility studies (see rationale below), with the initial items being based on the current (2010) CONSORT Statement [7]; (c) to elicit any further items or changes to items that the participants thought might be important; and (d) to identify which items the Delphi participants felt were the most important. We received research ethics approval for the Delphi study from the University of Sheffield Research Ethics Committee.

Personal networks allowed us to identify individual participants who were involved in, or interested in, pilot and feasibility studies. We also sent invitations to people on contact lists of the following: Canadian Institutes of Health Research, the Biostatistics Section of the Statistical Society of Canada, the American Statistical Association, UK NIHR, Medical Research Council (MRC) Methodology Hubs, the International Society for Clinical Biostatistics, the UK Clinical Trials Units Network, the Society for Clinical Trials, and the World Association of Medical Editors.

Based on the literature review described earlier, the current CONSORT Statement [7], and the mutually exclusive definitions of pilot and feasibility studies articulated by the UK National Institute for Health Research, we created two versions of the checklist—one for feasibility studies and the second for pilot studies (Table 1).

Table 1 Drafts of the initial proposed checklists for feasibility and pilot studies

The Delphi survey comprised three sections: participant demography, feedback on the NIHR and MRC definitions of feasibility and pilot studies [23, 26], and the two checklists (Table 1). The survey was produced in an online format using CLINVIVO software and distributed through a weblink sent in an email.

The Delphi process was carried out between June and October 2013. It was done in two phases. Phase 1 was the pilot phase. We piloted the Delphi survey using a “think-aloud” approach with 13 colleagues working at our own institutions. The purpose of this phase was to evaluate the questionnaire for face and content validity, usability, and clarity of its items.

Phase 2 was the main Delphi study, which was conducted in two rounds using an online survey conducted by Clinvivo.

  i. First round: We invited participants to declare an interest in the study by emailing Clinvivo; Clinvivo then replied by sending each participant a personalized link to the round. The round opened on July 26, 2013 and closed on August 27, 2013. Invitations were sent to all those who had expressed an interest up to and including August 26.

    The two checklists are presented in Table 1. Most items were identical or similar in the two lists. The participants rated items on a nine-point scale, ranging from 1 = “not at all appropriate” to 9 = “completely appropriate”. Participants could also write comments under each rating (Fig. 2).

    Fig. 2

    An example of Delphi presentation for the two checklists

    Overall, 93/100 participants responded to the online survey. Table 2 provides a summary of the demographic characteristics of the respondents.

    Table 2 The demographic characteristics of the main Delphi study: n = 93
  ii. Second round: Participants who had completed the first round were sent an email link to the second round on September 24. Reminders to complete it were sent on October 7 and 14. The round closed at midnight, International Date Line time, on October 15.

    In this round, participants were asked to review tables of scores and histograms for the pilot and feasibility items that 70% or more of the panel had rated using the two highest appropriateness scores (i.e. 8 and 9). They could then make additional comments on these items. Participants were also asked to review the remaining items, which had been rated slightly lower in terms of appropriateness, and for each item, they were asked to indicate whether they thought the item should be kept or discarded, or whether they were unsure or had no opinion.

    Participants were also asked whether they thought any reporting aspects had been missed and should be included in a checklist; separately, they were asked whether a pilot study checklist would be suitable for phase I studies, phase II studies, internal pilots, and external pilots, and whether a feasibility checklist would be suitable in the context of qualitative work and of what they would regard as quantitative work. In each case, participants could rate items as suitable, unsure/no opinion, or not suitable. Finally, participants could add comments on any other aspects of the suitability of pilot and feasibility study checklists. Overall, 85% (79/93) of the respondents participated in the second round.
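The consensus rule described above—retaining for review the items that at least 70% of the panel rated 8 or 9 on the nine-point appropriateness scale—can be sketched in code. This is only an illustration: the item names and ratings below are invented, and the paper does not specify what software the authors used for this analysis.

```python
# Sketch of the Delphi round-two consensus rule: an item reaches
# consensus when >= 70% of panellists gave it one of the two highest
# appropriateness scores (8 or 9) on the nine-point scale.

CONSENSUS_THRESHOLD = 0.70
HIGH_SCORES = {8, 9}


def meets_consensus(ratings):
    """Return True if at least 70% of ratings fall in the top two categories."""
    if not ratings:
        return False
    high = sum(1 for r in ratings if r in HIGH_SCORES)
    return high / len(ratings) >= CONSENSUS_THRESHOLD


# Hypothetical ratings from a panel of 10 for two invented checklist items.
item_ratings = {
    "pilot trial design": [9, 8, 9, 8, 8, 9, 7, 9, 8, 9],      # 9/10 high
    "subgroup analyses":  [5, 6, 8, 4, 7, 9, 3, 6, 5, 8],      # 3/10 high
}

for item, ratings in item_ratings.items():
    status = "consensus" if meets_consensus(ratings) else "no consensus"
    print(f"{item}: {status}")
```

Items meeting the threshold would go forward for comment only, while lower-rated items would be put to the keep/discard/unsure question described above.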

Overall, the Delphi results showed a strong agreement on checklist items both for pilot and feasibility studies. However, there was substantial disagreement about the definitions of pilot and feasibility studies and the distinction between them.

At the Edinburgh meeting, we presented four propositions regarding definitions and preliminary Delphi results [26]. The main outcome from the discussions was that participants unanimously suggested that we should begin by developing only one reporting checklist. At this stage, it was unclear how wide the scope of this checklist would be, though a strong steer from the meeting was not to make it too wide.

Part 4: the consensus meeting

Prior to the consensus meeting: In February 2014, our group had a face-to-face meeting in London, UK, to review progress. Based on the feedback from all the stakeholder engagements and Delphi process results, we redrafted our definitions, with feasibility as an overarching term, and we agreed to focus on reporting guidelines for pilot/feasibility RCTs as our next step. We finalized the draft list of items for a reporting guideline to be discussed at the consensus meeting. At this stage, we also confirmed with the CONSORT group that our checklist would be included as an official CONSORT checklist extension.

Consensus meeting: We held a 2-day meeting in Oxford, UK, on October 27–28, 2014, to seek feedback on the proposed items to be included in the guideline, and its scope. We invited a group of international stakeholders (n = 26) representing different professional sectors (academic, pharmaceutical, journal editors, publishers, funding bodies) and different clinical RCT roles (such as trialists, methodologists, statisticians, and clinicians).

Using approaches that were similar to those used in previous consensus meetings for other guidelines [7, 9–20], participants were presented in advance of the meeting with the results of the literature review and the Delphi survey. Working group members presented the background and an update on work done to date, in order to facilitate the discussions. We also presented a penultimate version of the checklist—based on the Delphi process and feedback from the earlier stakeholder engagement meetings. The meeting was audiotaped, and formal minutes were subsequently prepared and circulated to all attendees.

The key recommendations that emerged were as follows:

  • Modify items: It was recommended that 24 items should be modified. These modifications were primarily to prefix all references to “trial” with “pilot” to clearly indicate that the information being reported is about the pilot RCT, and not the main RCT. As in previous CONSORT extensions [9–19], some of the recommended changes begin with “if relevant” or “when applicable”, to show that some information which authors are being asked to report might not be relevant or applicable for their particular pilot RCT.

  • Add new items: Four new items were suggested as follows. Participants: how participants should be identified and consented; outcomes: if applicable, criteria used to judge whether, or how, to proceed with a future definitive RCT; limitations: implications for progression from the pilot to a future definitive RCT, including any proposed amendments or modifications to the protocol of the main RCT; and other information: if ethical approval/research review committee approval was confirmed, with a reference number.

  • Remove items: It was recommended that, in addition to an item our group had already suggested removing, one further item should be removed: “Methods for additional analyses, such as subgroup analyses and adjusted analyses”. Participants felt strongly that this item was not applicable to pilot RCTs, because such analyses would be about hypothesis testing or generation—which is not the focus of a pilot RCT.

Part 5: write-up, dissemination, and implementation

Following the consensus meeting, we continued to refine the content and wording of the items by virtual group discussion and by involving those who had attended the meeting, to ensure they reflected the decisions that had been made. There was a second working group meeting in London, UK, on January 12, 2015. We discussed the feedback from the consensus meeting in detail and outlined strategies to complete the write-up of the guideline, including plans for dissemination and implementation in order to maximize its adoption by various journals, professional associations, and the clinical trials community.

As with previous guidelines [7, 9–20], our guideline statement will be published with a detailed Explanation and Elaboration (E&E) document that will provide an in-depth explanation of the scientific rationale for each recommendation, together with an example of clear reporting for each item. We have sought feedback from the consensus conference participants on the E&E document to ensure that it accurately reflects the discussions and decisions that were made during the meeting. To disseminate the guideline widely, we will publish in peer-reviewed journals and give presentations and workshops at conferences and other venues. We also plan to seek endorsement of the guideline by journal editors. Research has shown that formal endorsement and adoption of the CONSORT Statement by journals is associated with improved quality of reporting [26].


This article has described the methods and processes that we have used to develop a CONSORT extension for reporting of pilot/feasibility RCTs—using the 2010 version of the CONSORT Statement as its basis [7]. The work actually began with a broader mandate, to develop guidelines for feasibility and pilot studies generally. However, after receiving feedback from the research community, we started with a narrower focus: first developing a set of guidelines for reporting feasibility and pilot RCTs. We have attempted to use the best available, evidence-based methods [21, 22], similar to those used by other guideline developers [7, 9–20]. These include establishing a working group to lead the project; conducting a systematic review of the literature to determine current practice and identify available guidelines; conducting an online Delphi survey on the initial list of items to be included in the guideline; holding a consensus meeting attended by various stakeholders to finalize the list; and creating a dissemination plan to enhance uptake of the guideline. In addition, because of the perceived differences of opinion about the definitions of feasibility and pilot studies, we found that an ongoing discussion amongst the research community over a considerable period was invaluable for validating the direction of our work.

In this paper, we have provided detailed descriptions of the methods and processes that we used to develop our guideline. These details are intended to provide readers with enough information to assess the quality and validity of the methods used to develop the CONSORT extension to pilot/feasibility RCTs guideline. We applied approaches, methods, and processes that had been used previously by guideline developers, to ensure that the foundation for and development of our own guideline was truly evidence-based. As with previous guidelines [7, 9–19], we involved a wide spectrum of stakeholders and participants representing different sectors, perspectives, areas of expertise, and experiences with trials—both in the Delphi process and the consensus meeting. The participants in the Delphi surveys included (bio)statisticians, clinicians, health services researchers, regulatory staff, and primary care practitioners, to mention a few. A potential limitation is that the views of non-statisticians may not have been adequately represented, since the majority of the participants were statisticians. The participants in the consensus meeting came from different stakeholder groups representing professional sectors (academic, pharma, journal editors, publishers, funding bodies) and different clinical RCT roles (such as trialists, methodologists, statisticians, and clinicians). Prior to the consensus meeting, we had several Skype and face-to-face discussions and presentations at several professional conferences, to gather data and feedback. These steps were preceded by an extensive review of the literature to assess the reporting of pilot and feasibility trials.

We also spent a considerable amount of time debating alternative definitions of feasibility and pilot trials, and we used the Delphi study along with discussions at conferences, and the expert consensus meeting, to get feedback on them. This led to the development of a framework, in which pilot studies are viewed as a subset of feasibility studies [26]. Within this framework, a feasibility study is defined as a study asking “whether something can be done, should we proceed with it, and if so, how” [26]. In contrast, “a pilot study asks the same questions but also has a specific design feature: in a pilot study a future study, or part of a future study, is conducted on a smaller scale” [26]. The framework and the resulting definitions became essential elements in the development process of the guideline.

We hope that our guideline will improve the reporting of pilot/feasibility RCTs. We have already liaised with editors of some key clinical journals, and we also plan to embark on a campaign to get more journals to endorse the guideline. The intent is to target several groups: authors of journal manuscripts, who can use it as an outline for reporting results of their pilot/feasibility RCTs; manuscript reviewers, who can use it as a template to evaluate reports of pilot/feasibility RCTs; funding agencies, for use as a foundation for creating funding programmes for, and evaluating, pilot and feasibility RCT proposals; educators, for use as a tool for training students and researchers about the unique nature of pilot RCT methodology and reporting; and end-users, for use as a tool to identify relevant pilot RCTs—that provide evidence about feasibility to inform their planning of main RCTs or other pilot/feasibility RCTs.

Like all reporting guidelines, ours will require re-evaluation and revisions over time—to ensure that it is kept up to date with evolving research and knowledge on pilot and feasibility trials.

One major outcome of this work is the setting up of a new journal by BioMed Central, Pilot and Feasibility Studies, which was launched on January 12, 2015. It provides a platform for publishing these types of studies and constitutes a much-needed place for researchers to share their work and ideas on all aspects of the design, conduct, and reporting of pilot and feasibility studies in health or biomedical research. While this is an important achievement on its own, we hope that our guideline will also be a catalyst for the establishment of better publication practices and editorial policies regarding the reporting of pilot and feasibility trials—a deficiency that has been noted previously [1, 3]. As of March 19, 2016, the new journal has received over 70,000 unique web accesses and published 61 papers, of which 34 are protocols, 21 report the results of pilot or feasibility studies/trials, and 6 are reviews, commentaries, or methods papers. These statistics suggest that investigators are indeed using the journal as an outlet for publishing their pilot or feasibility work.

Participants at the Oxford 2014 consensus meeting

Doug Altman1, Colin Begg2, Frank Bretz3, Marion Campbell4, Erik Cobo5, Peter Craig6, Peter Davidson7, Trish Groves8, Freedom Gumedze9, Jenny Hewison10, Allison Hirst11, Pat Hoddinott12, Sallie Lamb13, Tom Lang14, Elaine McColl15, Alicia O’Cathain16, Daniel R Shanahan17, Chris Sutton18, Peter Tugwell19

Working group

Christine Bond20, Michael Campbell21, Claire Coleman22, Sandra Eldridge22, Sally Hopewell23, Gillian Lancaster24, Lehana Thabane25

  1. Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
  2. Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
  3. Statistical Methodology and Consulting, Novartis Pharma AG, Basel, Switzerland
  4. Health Services Research Unit, University of Aberdeen, Polwarth Building, Foresterhill, Aberdeen, UK
  5. Department of Statistics and Operations Research, UPC, Barcelona-Tech, Spain
  6. MRC/CSO Social and Public Health Sciences Unit, University of Glasgow, Glasgow, UK
  7. National Institute for Health Research, Evaluation, Trials, and Studies Coordinating Centre, University of Southampton, Southampton, UK
  8. BMJ, BMA House, London, UK
  9. Department of Statistical Sciences, University of Cape Town, Cape Town, South Africa
  10. Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
  11. IDEAL Collaboration, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
  12. Nursing Midwifery and Allied Health Professionals Research Unit, Faculty of Health Sciences and Sport, University of Stirling, Scotland, UK
  13. Oxford Clinical Trials Research Unit, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
  14. Tom Lang Communications and Training International, Kirkland, Washington, USA
  15. Newcastle Clinical Trials Unit and Institute of Health & Society, Newcastle University, Newcastle Upon Tyne, UK
  16. School of Health and Related Research, University of Sheffield, Sheffield, UK
  17. BioMed Central, London, UK
  18. School of Health, University of Central Lancashire, Preston, UK
  19. Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
  20. Centre of Academic Primary Care, University of Aberdeen, Aberdeen, Scotland, UK
  21. School of Health and Related Research, University of Sheffield, Sheffield, South Yorkshire, UK
  22. Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
  23. Oxford Clinical Trials Research Unit, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
  24. Department of Mathematics and Statistics, Lancaster University, Lancaster, Lancashire, UK
  25. Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada


References

  1. Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10(2):307–12.
  2. Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10(1):1.
  3. Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10(1):67.
  4. Shanyinde M, Pickering RM, Weatherall M. Questions asked and answered in pilot and feasibility randomized controlled trials. BMC Med Res Methodol. 2011;11(1):117.
  5. Loscalzo J. Pilot trials in clinical research: of what value are they? Circulation. 2009;119(13):1694–6.
  6. Arnold DM, Burns KE, Adhikari NK, Kho ME, Meade MO, Cook DJ. The design and interpretation of pilot trials in clinical research in critical care. Crit Care Med. 2009;37(1):S69–74.
  7. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.
  8. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276(8):637–9.
  9. Campbell MK, Piaggio G, Elbourne DR, Altman DG. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661.
  10. Piaggio G, Elbourne DR, Pocock SJ, Evans SJ, Altman DG; CONSORT Group. Reporting of noninferiority and equivalence randomized trials: extension of the CONSORT 2010 statement. JAMA. 2012;308(24):2594–604.
  11. Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
  12. Vohra S, Shamseer L, Sampson M, Bukutu C, Schmid CH, Tate R, et al. CONSORT extension for reporting N-of-1 trials (CENT) 2015 statement. BMJ. 2015;350:h1738.
  13. Gagnier JJ, Boon H, Rochon P, Moher D, Barnes J, Bombardier C. Reporting randomized, controlled trials of herbal interventions: an elaborated CONSORT statement. Ann Intern Med. 2006;144(5):364–7.
  14. Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P. Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med. 2008;148(4):W-60–W-66.
  15. MacPherson H, Altman DG, Hammerschlag R, Youping L, Taixiang W, White A, et al. Revised STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA): extending the CONSORT statement. PLoS Med. 2010;7(6):e1000261.
  16. Calvert M, Blazeby J, Altman DG, Revicki DA, Moher D, Brundage MD, et al. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA. 2013;309(8):814–22.
  17. Ioannidis JP, Evans SJ, Gøtzsche PC, O'Neill RT, Altman DG, Schulz K, et al. Better reporting of harms in randomized trials: an extension of the CONSORT statement. Ann Intern Med. 2004;141(10):781–8.
  18. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, et al. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Med. 2008;5(1):e20.
  19. Chan A-W, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin JA, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ. 2013;346:e7586.
  20. Montgomery P, Grant S, Hopewell S, Macdonald G, Moher D, Michie S, et al. Protocol for CONSORT-SPI: an extension for social and psychological interventions. Implement Sci. 2013;8(1):99.
  21. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.
  22. Simera I, Altman DG, Moher D, Schulz KF, Hoey J. Guidelines for reporting health research: the EQUATOR network’s survey of guideline authors. PLoS Med. 2008;5:869–74.
  23. National Institute for Health Research. NIHR evaluation, trials and studies | Pilot studies; 2015 [cited 17 August 2015]. Available from:
  24. CONSORT Statement website [last accessed 17 August 2015]. Available from:
  25. Dolgin E. Publication checklist proposed to boost rigor of pilot trials. Nat Med. 2013;19(7):795–6.
  26. Eldridge SM, Lancaster GA, Campbell MJ, Thabane L, Hopewell S, Coleman CL, et al. Defining feasibility and pilot studies in preparation for randomised controlled trials: development of a conceptual framework. PLoS ONE. 2016;11(3):e0150205.
  27. Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, et al. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.
  28. The EQUATOR network [last accessed 17 August 2015]. Available from:
  29. Murphy MK, Black NA, Lamping DL, McKee CM, Sanderson CFB, et al. Consensus development methods, and their use in clinical guideline development. Health Technol Assess. 1998;2:1–88.



Acknowledgements

CC was funded by a National Institute for Health Research (NIHR) Research Methods Fellowship. This article therefore presents independent research funded by the NIHR. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. We are grateful to Alicia O’Cathain (University of Sheffield, Sheffield, UK) and Pat Hoddinott (University of Stirling, Scotland, UK) for attending and providing valuable input at one of our Heathrow Airport meetings. We thank the Clinvivo staff for their services and support in conducting the Delphi study. We are grateful to Dr. Stephen Walter for proofreading the paper.

Author information

Correspondence to Lehana Thabane.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

ICMJE criteria for authorship read and met: SE, GL, and CB conceived the study supported by LT, SH, and MJC. SE led the overall development and execution. SE, MJC, GL, CB, LT, SH, and CC all contributed to various aspects of the empirical work. LT, SH, and MJC wrote the first draft of the manuscript. All authors participated in the international consensus meeting, commented on drafts of the manuscript, and have approved this manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article



Keywords

  • CONSORT extension
  • Pilot randomized controlled trial
  • Reporting guideline

