Applying mixed methods to pilot feasibility studies to inform intervention trials

Abstract

Background

Pilot feasibility studies serve a uniquely important role in preparing for larger scale intervention trials by examining the feasibility and acceptability of interventions and the methods used to test them. Mixed methods (collecting, analyzing, and integrating quantitative and qualitative data and results) can optimize what can be learned from pilot feasibility studies to prepare rigorous intervention trials. Despite increasing use of mixed method designs in intervention trials, there is limited guidance on how to apply these approaches to address pilot feasibility study goals. The purpose of this article is to offer methodological guidance for how investigators can plan to integrate quantitative and qualitative methods within pilot feasibility studies to comprehensively address key research questions.

Methods

We used an informal consensus-based process informed by key methodological resources and our team’s complementary expertise as intervention researchers and mixed methodologists to develop guidance for applying mixed methods to optimize what can be learned from pilot feasibility studies. We developed this methodological guidance as faculty in the Mixed Methods Research Training Program (MMRTP) for the Health Sciences (R25MH104660) funded by the National Institutes of Health through the Office of Behavioral and Social Science Research.

Results

We provide the following guidance for applying mixed methods to optimize pilot feasibility studies: (1) identify feasibility domain(s) that will be examined using mixed methods, (2) align quantitative and qualitative data sources for the domain(s) selected for mixing methods, (3) determine the timing of the quantitative and qualitative data collection within the flow of the pilot study, (4) plan integrative analyses using joint displays to understand feasibility, and (5) prepare to draw meta-inferences about feasibility and implications for the future trial from the integrated data.

Conclusions

By effectively integrating quantitative and qualitative data within pilot feasibility studies, investigators can harness the potential of mixed methods for developing comprehensive and nuanced understandings about feasibility. Our guidance can help researchers to consider the range of key decisions needed during intervention pilot feasibility testing to achieve a rigorous mixed methods approach generating enhanced insights to inform future intervention trials.

Key messages regarding feasibility

  • Despite increasing use of mixed method designs in intervention trials, there is limited guidance on how to apply these approaches to address pilot feasibility study goals.

  • We provide guidance for applying mixed methods to optimize pilot feasibility studies.

  • Our guidance can help researchers to consider the range of key decisions needed during intervention pilot feasibility testing to achieve a rigorous mixed methods approach generating enhanced insights to inform design of intervention trials.

Background

Intervention research in the health sciences encompasses the procedures used to develop, refine, and test the efficacy and effectiveness of an intervention on a clinical outcome [1, 2]. Intervention development is often an iterative, nonlinear process of piloting a version of an intervention, getting feedback from participants and collaborators to identify problems, implementing solutions to address problems, and repeating this cycle until the intervention and study procedures are determined to be feasible and acceptable [3, 4]. Pilot feasibility studies occur during intervention development or adaptation and are critical for informing decisions about whether and how to design rigorous efficacy and effectiveness trials [5]. The primary aim of pilot feasibility studies is to examine the feasibility of interventions and the methods used to evaluate them to answer the overarching question “Can it work?” prior to examining “Does it work?” [6].

While pilot feasibility studies are critical components of intervention development, adaptation, and testing [7,8,9], they can be challenging to design, implement, and interpret. Topic areas commonly addressed by pilot feasibility studies include acceptability, usability, appropriateness, practicality, adaptation, and implementation barriers and facilitators [10]. Within these areas, investigators focus on specific domains of feasibility including recruitment capability, randomization acceptability, data collection procedures and outcome measures, intervention delivery and participant acceptability, intervention adherence and safety, barriers and facilitators to implementing the study, and retention of participants in both the treatment and study [4, 11]. Examples of feasibility domains and brief definitions are listed in Table 1. Many factors can affect these feasibility domains, including providers’ and other professionals’ willingness and ability to assist with recruitment; participants’ time, capacity, and interest in completing assessments and participating in the intervention; and whether the research team has the expertise, skills, space, and time to conduct the study [4]. Pilot feasibility testing becomes more complex when research is conducted with populations historically underrepresented in clinical trials [12, 13] and in low-resource settings that present significant organizational, cultural, and infrastructure challenges [14].

Table 1 Example feasibility domains and brief definitions in pilot feasibility studies

Investigators often use pre-specified criteria to evaluate feasibility domains and determine whether or how to proceed with a future trial [15]; however, there has been limited guidance on what should be considered when formulating progression criteria [15]. Investigators commonly use quantitative metrics such as recruitment and retention rates to determine whether or not pre-specified feasibility criteria and milestones were met to signal that the research can advance to the next stage of testing [7, 13, 16]. However, applying binary indicators of feasibility provides limited information about why aspects of intervention or study procedures were or were not feasible and what improvements are needed to enhance feasibility [17].

Qualitative methods have been used within pilot studies to explore aspects of feasibility in more depth and from the perspective of key stakeholders, including individuals and organizations that have an interest in or are affected by the intervention [18]. O’Cathain and colleagues [19] provided methodological guidance for using a variety of qualitative methods alongside quantitative approaches in feasibility studies to explore uncertainties and optimize an intervention or trial procedures before conducting a fully powered trial. Despite the potential value of combining quantitative and qualitative methods to better capture the complexity of interventions and implementation contexts [20, 21], the use of more than one method further adds to the challenge of implementing and interpreting pilot feasibility studies.

Since O’Cathain and colleagues [19] published their guidance, there have been calls for intervention researchers to not only use quantitative and qualitative methods but to meaningfully integrate them within rigorous mixed methods approaches [22, 23].

This article extends previous literature by offering practical guidance for how investigators can plan to integrate quantitative and qualitative data within pilot feasibility studies to address key feasibility questions. The authors developed this guidance as faculty in the Mixed Methods Research Training Program (MMRTP) for the Health Sciences (R25MH104660). Our team was motivated to develop and disseminate this guidance after observing that while many MMRTP scholars grasped the application of mixed methods in large-scale randomized controlled trials, they had difficulty applying mixed methods designs to small-scale intervention development studies and were uncertain about how to design for data integration in pilot studies, given the challenge posed by the numerous feasibility domains of interest. This article was developed in response to requests from MMRTP scholars and other investigators for practical guidance on this topic.

We used an informal consensus-based approach [24] in which all members of our team had equal input on and approved the methodological guidance we generated for applying mixed methods in pilot feasibility studies. Key methodological resources on pilot feasibility studies and mixed methods study designs combined with our team’s complementary experiences and expertise as intervention researchers and mixed methodologists informed the development of our guidance. Methodological resources were reviewed to identify considerations and advice for how to apply mixed methods within the context of intervention research and to identify the primary strategies recommended for achieving integration across mixed methods approaches.

We begin with a brief overview of mixed methods intervention research and then provide methodological guidance for considerations needed to apply mixed methods research in the context of pilot feasibility studies. We emphasize the importance of selecting, among many possible domains, the key domain(s) of feasibility that would benefit most from a mixed methods investigation as a way to help investigators stay focused and use limited resources efficiently in pilot feasibility studies. Specifically, we offer practical guidance for five planning considerations for using mixed methods in pilot feasibility studies, including recommended steps in the planning process and common pitfalls to avoid.

Brief overview of mixed methods intervention research

Mixed methods research is research in which the investigator intentionally integrates the questions, data sources, analysis procedures, and interpretations associated with both quantitative and qualitative research within a study or program of research [20, 25, 26]. The basic premise of mixed methods research is that the combination of quantitative results about magnitudes and relationships with qualitative results about experiences and meaning can produce enhanced insights about problems of interest [25]. Therefore, the intent for using a mixed methods approach is to generate insights that are more comprehensive, nuanced, contextually situated, and/or valid than could be achieved with a single approach [27, 28]. Mixed methods are particularly relevant for addressing complex problems in health sciences to capture the perspectives of key stakeholders including patients, providers, and organizations [29, 30].

The relative timing of the quantitative and qualitative methods is an important decision when designing and conducting mixed methods studies [27, 28]. For example, quantitative and qualitative methods can be used with sequential timing, with one approach building on the insights obtained from the other. An explanatory sequential approach involves the collection and analysis of quantitative data connected to a subsequent collection and analysis of in-depth qualitative data to explain or expand on the initial quantitative results. An exploratory sequential approach involves the collection and analysis of qualitative data that builds to a subsequent collection and analysis of quantitative data to assess or generalize the initial qualitative results. Alternatively, quantitative and qualitative methods can be implemented with concurrent timing. In a convergent approach, investigators collect and analyze quantitative and qualitative data during the same phase of the research and then merge the two sets of results to create a comprehensive or corroborated interpretation.

A central methodological consideration for all mixed methods approaches is integration, which occurs when investigators link the quantitative and qualitative aspects of a study to each other in ways that produce enhanced insights [27, 31, 32]. There are three common conceptualizations of mixed methods integration [31]. Connecting integration occurs when investigators use the results of one method to inform sampling decisions for another method (e.g., selecting interview participants based on quantitative response patterns). Building integration occurs when investigators use the results of one method to inform data collection procedures for another method (e.g., developing survey items based on qualitative themes and participant quotes). Merging integration occurs when investigators co-analyze, compare, or relate quantitative and qualitative data and results with each other (e.g., comparing quantitative statistical results with qualitative thematic perspectives). Achieving meaningful integration through connecting, building, and/or merging allows investigators to harness the full potential of a mixed methods approach but also represents the fundamental challenge of applying mixed methods research [33]. Numerous techniques have been developed to support integration in mixed methods studies from the design of research questions to the use of integrative analytic procedures [27, 31, 34, 35], but to date, the application of these techniques has been limited.

Mixed methods research can be applied to design rigorous methodologically sound intervention trials, including efficacy trials, effectiveness trials, and effectiveness-implementation hybrid trials that blend design components of clinical effectiveness and implementation research [21, 36, 37]. Fetters and Molina-Azorin [22] advocated that “the modus operandi for conducting interventional studies should be using a mixed methods approach.” By adding qualitative data collection to quantitative assessments of the intervention process and outcomes and intentionally integrating the two approaches, intervention studies become mixed methods research designs [27]. Qualitative data can be collected before, during, and after implementation of the intervention, and integrating the quantitative and qualitative data can help investigators understand not only whether an intervention works but also how and why or why not [22, 36]. Common applications of mixed methods in randomized controlled trials (RCTs) involve embedding qualitative research to explore barriers and facilitators to study recruitment [38, 39], intervention adherence [40, 41], and study retention [42, 43]. Qualitative research has also been used in RCTs to examine mechanisms [44] and contextual influences on interventions and outcomes [45].

We define mixed methods pilot feasibility studies as studies in which the investigators intentionally integrate quantitative and qualitative approaches to examine questions about the feasibility and acceptability of the intervention and study procedures from the perspectives of one or more key stakeholders. Similar to mixed methods intervention trials, the fundamental assumption of this approach is that mixing methods can help investigators better understand key questions central to the goals of pilot feasibility research.

Five planning considerations to optimize mixed methods pilot feasibility studies

In the sections that follow and summarized in Table 2, we provide guidance for five planning considerations that address fundamental questions for designing mixed methods pilot feasibility studies. These planning considerations draw from key methodological literature on (a) pilot study designs [4, 6, 19, 46], (b) the use of mixed methods in intervention research [22, 36, 47], and (c) mixed methods integration strategies [27, 31, 48]. We provide information about each consideration including an overview and recommended steps for applying them to a mixed methods pilot feasibility study. Although we present the considerations in discrete and linear steps, investigators should keep in mind that study planning is a complex and iterative process, and planning considerations should be selected and applied in an order that is most appropriate for a particular pilot feasibility study.

Table 2 Mixed methods planning considerations for pilot feasibility studies

Planning consideration 1: identify the feasibility domain(s) to examine with mixed methods

In the context of intervention development research, several domains of feasibility are often of interest [46, 49]. To maximize the use of mixed methods, we recommend that investigators identify the domain(s) of feasibility where there is the most uncertainty and/or potential to generate new knowledge that will inform the design of a future intervention trial. Table 1 provides examples of key feasibility domains of interest. Examples of uncertainty related to these domains include feasibility of online recruitment procedures for populations with limited access to and/or experience using technology, comprehension of randomization among people of lower socioeconomic backgrounds, data collection procedures that have not been previously tested with the target population, and adherence to in-person treatment for a population or in a setting with limited transportation. Not every feasibility domain may require mixing quantitative and qualitative methods, and researchers risk unnecessarily complicating pilot study designs with a broad and unfocused application of mixed methods.

Once the feasibility domain(s) of interest are identified, investigators should state their reasons for planning to integrate mixed methods within their pilot feasibility studies. Three common reasons for mixing methods are presented in Table 3 and include the following: triangulation (to identify areas of corroboration and dissonance in the data by comparing quantitative and qualitative data), completeness (to gain a comprehensive understanding by synthesizing quantitative and qualitative information), and explanation (to explain results by connecting quantitative and qualitative information) [50, 51]. A useful strategy for conceptualizing a mixing reason is for investigators to specify a question that requires both quantitative and qualitative data and aligns with the particular reason [27, 52]. For example, “What is the recruitment rate (quantitative question) and what are the barriers and facilitators to recruitment (qualitative question)?” and “How do opinions about the intervention format (qualitative question) differ among participants with high vs. low levels of satisfaction with the intervention (quantitative question)?” By identifying specific questions that call for integrating the data in pilot feasibility studies, investigators can be intentional about their reasons for planning to mix methods within their study. See Table 3 for examples of different mixed methods integration questions about feasibility. A potential pitfall is for investigators to feel compelled to ask mixed methods questions about all feasibility domains, thereby increasing the scope of the pilot study beyond available resources. Instead, investigators should note that some domains may be best addressed with questions that call for quantitative methods or qualitative methods alone.

Table 3 Reasons and questions that call for mixing methods to examine feasibility domains

The following steps are recommended for identifying the feasibility domains that will be examined with mixed methods and the reasons for mixing:

  1. Identify the feasibility domains of primary concern in the pilot study, considering the current stage of development and existing knowledge about the intervention and trial methods.

  2. Focus the plan to mix methods on the feasibility domain(s) with the most uncertainty regarding feasibility or complexity of the intervention or study procedures and/or the greatest potential to generate new knowledge that will inform future intervention trial design. Consider where a combination of methods could provide the additional insight and information needed to fully understand feasibility for reasons such as triangulation, completeness, or explanation. One or more domains can be selected for mixing depending on factors such as the pilot feasibility study timeline and resources.

  3. For the domain(s) needing mixed methods, formulate mixed methods integration question(s) consistent with the reason for wanting to combine quantitative and qualitative information to understand and optimize feasibility. For example, if a comprehensive understanding of the recruitment domain is needed, one could ask whether recruitment procedures are feasible as planned and how they can be optimized for the study contexts prior to a larger trial.

Planning consideration 2: align quantitative and qualitative data sources for the feasibility domain(s) selected for mixing methods

The second mixed methods planning consideration involves identifying and aligning the quantitative and qualitative data sources that will be used to address the mixed methods study questions [32, 53]. Investigators using mixed methods need to plan what data will be collected and from whom to support integration of quantitative and qualitative data. To facilitate this process within mixed methods approaches, investigators are encouraged to develop data source tables that specify the different data sources (quantitative and/or qualitative) that will provide information about the feasibility domain(s) selected for mixing in the study [20, 54]. See Table 4 for an example of a data source table for mixed methods pilot feasibility studies. Such a table usually lists the feasibility domain(s) in rows, with columns for the planned quantitative and qualitative methods. This organization assists the investigator with ensuring that both quantitative and qualitative information will be gathered to address the study’s mixed methods questions.

Table 4 Example data source table for selected domains in a mixed methods pilot feasibility study

In a mixed methods pilot feasibility study, the quantitative and qualitative data sources need to be aligned not only to the feasibility domain(s) selected for mixing but also with pre-specified criteria used to evaluate feasibility. The CONSORT extension states that “a decision process about how to proceed needs to be built into the design of the pilot trial” [15]. Some investigators find it helpful to use a traffic light system (stop-amend-go/red-amber-green) for evaluating progression to a main trial determined by a set of a priori criteria [55, 56]. Examples of progression criteria from pilot trial to a fully powered intervention trial include achieving a pre-specified rate of recruitment in a given time frame, retention or data completion, and levels of intervention acceptability from the perspective of participants [5]. Central to this approach is pausing to amend or refine the intervention and/or study procedures to meet progression criteria before advancing to next steps in the research. Published guidance is available to help investigators set progression criteria for a pilot feasibility study [7, 55, 57, 58].
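The traffic light (red-amber-green) logic for a priori progression criteria can be sketched as a simple classification rule. The following is a minimal illustration only: the domain names, thresholds, and observed rates are invented for the example and are not drawn from the article or any published criteria.

```python
# Illustrative traffic-light evaluation of a priori progression criteria.
# All domain names, targets, and observed values are hypothetical examples.

def traffic_light(observed, green_at, amber_at):
    """Classify an observed rate against pre-specified thresholds."""
    if observed >= green_at:
        return "green"   # proceed to the main trial as planned
    if observed >= amber_at:
        return "amber"   # pause and amend procedures before proceeding
    return "red"         # do not proceed without substantial changes

# Hypothetical progression criteria: (green threshold, amber threshold)
criteria = {
    "recruitment_rate": (0.80, 0.60),  # proportion of target enrolled on time
    "retention_rate":   (0.85, 0.70),  # proportion completing final assessment
    "data_completion":  (0.90, 0.75),  # proportion of outcome data collected
}

# Hypothetical observed results from the pilot
observed = {"recruitment_rate": 0.72, "retention_rate": 0.88, "data_completion": 0.91}

decisions = {domain: traffic_light(observed[domain], green, amber)
             for domain, (green, amber) in criteria.items()}
print(decisions)
# Here recruitment is "amber": amend recruitment procedures before the main trial.
```

The point of the sketch is that an "amber" result is not a failure signal; consistent with the pause-amend-refine logic described above, it flags the specific domain whose procedures should be amended, and the paired qualitative data can then explain what to amend.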

Once feasibility domains selected for mixing methods have been specified and progression criteria set, investigators should decide which participants and other stakeholders should be included to provide the information necessary to examine the feasibility domains. Individuals from whom it may be important to collect data include enrolled participants, nonresponders, participants who drop out of the treatment and/or study, caregivers, nurses, physicians, clinic staff, and community members. For data sources, investigators should consider the full range of possible quantitative and qualitative methods to address their feasibility questions and progression criteria [19].

A common pitfall in mixed methods pilot studies is for investigators to default to a “usual” qualitative method, such as focus groups and interviews, without full consideration of the available options. Qualitative methods might include open-ended items on questionnaires, one-on-one interviews, focus groups, unstructured observations, field notes, session recordings, and photographs [59]. With all these options, another potential pitfall for mixed methods pilot feasibility studies occurs when investigators try to gather too much data from too many stakeholders, going beyond their resources and ability to manage it all. Thus, investigators should focus on gathering the data needed to answer the stated feasibility questions and evaluate progression criteria, keeping their resources and the associated ethical considerations in mind.

We recommend the following steps for aligning the data sources needed to address the feasibility domains of interest and plan for integration in a mixed methods pilot study:

  1. Specify benchmarks and set clear progression criteria for determining feasibility for the domain(s) of interest.

  2. Identify the most relevant participants for the selected feasibility domain(s). Consider who can best contribute to understanding the feasibility concerns, such as enrolled participants, nonresponders, participants who drop out of the treatment and/or study, caregivers, recruiters, intervention clinicians, clinic staff, and community members.

  3. Identify the quantitative and qualitative data sources most appropriate for addressing the study’s questions about feasibility and determining whether benchmarks are met. Consider the full range of possible quantitative and qualitative methods and make decisions based on what needs to be learned and the study resources. Keep in mind that even a small sample or data in the form of observation field notes can provide useful information about feasibility.

  4. Develop a data source table that indicates which participants and data sources will provide information for each feasibility domain. Be clear about which sources are considered quantitative and which are qualitative to see how they align with the study goals and corresponding research questions. Consider adding information about feasibility progression criteria and benchmarks for each domain as well.

Planning consideration 3: determine the timing of the quantitative and qualitative data collection within the flow of the pilot feasibility study activities

Mixed methods pilot feasibility studies commonly use concurrent mixed methods timing where quantitative and qualitative datasets are collected during the pilot and then analyzed and interpreted together after the intervention is piloted [27, 36]. However, investigators have many options to consider regarding when to collect the quantitative and qualitative data sources in relation to each other and in relation to the flow of activities of the pilot study. To work through decisions about when to collect different data sources, investigators can draw diagrams of their study procedures to plan the flow of the quantitative and qualitative research activities [27, 54]. For a mixed methods pilot study, such a diagram could be organized broadly in terms of activities occurring before, during, and after piloting the intervention [36] or more specifically following the phases of a pilot outlined in the CONSORT extension for pilot and feasibility trials flow diagram (e.g., screening, enrollment, allocation, and assessment) [15]. Investigators should consider the many options for when data could be gathered and determine the most appropriate points within the flow of activities to collect each type of data to optimize learning about the feasibility domain(s) selected for mixing in a pilot feasibility study.

In our experience, it is important to give special attention to the timing of the qualitative data collection. It is common for investigators to wait to gather the qualitative data until the end of a trial. While this approach may have strengths for investigating some feasibility questions, it can have limitations for other feasibility domains such as questions about recruitment, randomization, or retention. For example, to understand why participants withdraw from an intervention after allocation, investigators may want to interview participants soon after the event. Investigators can also plan to implement a flexible approach with regard to timing where qualitative data is collected as problems and implementation barriers arise so they may document information in real time that will help them make necessary refinements to the intervention and study protocol. Investigators might also plan more than one iteration of the pilot study where refinements are made based on initial learnings and then additional data is collected during a subsequent iteration.

The following steps are recommended for planning the timing of the different forms of data collection within the flow of a mixed methods pilot study:

  1. Map out the flow of the major activities in the pilot study (e.g., recruitment, intervention, assessment). Some investigators may prefer to use the CONSORT flow diagram (http://www.consort-statement.org/extensions/overview/pilotandfeasibility) for this map, while others prefer to develop their own diagram using shapes and arrows.

  2. Within the flow diagram, identify the specific points when the different quantitative and qualitative data sources could be gathered to maximize learning about the feasibility domain(s) selected for mixing.

  3. Consider how the different sources of data can relate to each other. Two possibilities include the following:

    a. Combine data by gathering both quantitative and qualitative data during the same stages of the pilot. For example, gather recruitment rates alongside field notes about recruiters’ interactions with potential participants and people making the referrals, or gather qualitative observations during intervention sessions along with quantitative satisfaction surveys. Using a concurrent approach, the quantitative data on recruitment rates can be merged with qualitative observations to create a comprehensive or corroborated interpretation.

    b. Link data by using information from one data source to make decisions about the sample for the other data source. For example, group participants by quantitative adherence scores (high vs. low) and select individuals from each group to interview qualitatively. Using a sequential approach, the quantitative results on adherence can be connected to the qualitative data collection and results to generate explanation and produce enhanced insights.

  4. Consider how the data collection activities relate to the intervention development and pilot study process. Early in intervention development, recognize the value of an iterative approach to piloting where initial findings are used to improve the intervention and procedures before piloting again. In the later stages of development, be mindful that the timing of data collection does not introduce bias through confounding influences that might interfere with the implementation of the intervention and raise validity concerns. For example, incorporating qualitative research methods that require additional participation in study activities may have an adverse effect on retention in the trial, where additional commitments are required from participants.
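The "link data" option in step 3b, where quantitative results drive qualitative sampling decisions (connecting integration), can be sketched as follows. The participant IDs, adherence scores, cutoff, and group sizes are hypothetical illustrations invented for the example.

```python
# Illustrative connecting integration: use quantitative adherence scores to
# select a purposive qualitative interview sample. All data are hypothetical.
import random

# Hypothetical adherence scores (proportion of sessions attended) by participant ID
adherence = {
    "P01": 0.95, "P02": 0.40, "P03": 0.85, "P04": 0.30,
    "P05": 0.90, "P06": 0.55, "P07": 0.20, "P08": 0.80,
}

CUTOFF = 0.75  # hypothetical threshold separating high vs. low adherence
high = sorted(pid for pid, score in adherence.items() if score >= CUTOFF)
low = sorted(pid for pid, score in adherence.items() if score < CUTOFF)

# Select up to 3 participants per group for follow-up qualitative interviews
rng = random.Random(0)  # seeded so the sampling sketch is reproducible
interview_sample = {
    "high_adherence": rng.sample(high, k=min(3, len(high))),
    "low_adherence": rng.sample(low, k=min(3, len(low))),
}
print(interview_sample)
```

Interviewing both groups positions the subsequent qualitative analysis to explain the quantitative adherence pattern, which is the sequential (explanatory) logic described in step 3b. In practice, purposive rather than random selection within groups may be preferable; random sampling is used here only to keep the sketch simple.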

Planning consideration 4: plan integrative analyses using joint displays to understand feasibility

The fourth planning consideration involves envisioning how to bring the quantitative and qualitative data, results, and interpretations together for joint interpretation using integrative analyses. There are a variety of approaches available for conducting integrative analyses [31, 34, 35]. One specific approach that has gained prominence within mixed methods is to develop joint displays [27]. Joint displays are tables (or figures) used to compare, synthesize, or interconnect quantitative and qualitative data, results, and/or interpretations to generate further insights [48]. While these displays can be effective for communicating integrated results within presentations or publications, they are also important analytic tools for investigators to bring the different strands of data together to generate new insights [54]. By developing joint displays, investigators are performing integrative data analyses. See Additional file 1 for template examples of joint displays and citations to published examples for the mixing reasons listed in Table 3.

A potential pitfall is for investigators to focus only on comparing quantitative and qualitative results for agreement without considering if and how to synthesize or interconnect the data. Comparing quantitative and qualitative results in a joint display aligns with triangulation, but as highlighted previously in Table 3, there are multiple reasons why investigators may want to bring together quantitative and qualitative results. For example, an investigator who seeks to explain differences in the feasibility of recruitment across clinics might plan to develop a table where each clinic site is a row and the columns summarize key results for each setting (e.g., quantitative monthly recruitment rates, qualitative barriers and facilitators identified from fieldnotes, and qualitative themes about cultural understandings from clinic partners). By arraying the different results by setting, the investigator might uncover contextual parameters related to recruitment and identify potential modifications to procedures to enhance the feasibility of recruitment.

Investigators wanting to plan integrative analyses using joint displays within their pilot feasibility studies are encouraged to use the following steps:

  1. Review the feasibility domain(s), the mixed methods questions that were asked, and the different forms of data and results that are available. Plan to develop joint displays for the feasibility domain(s) selected for mixing and the reasons/questions that called for mixing methods.

  2. For a mixing reason of triangulation, plan to develop a comparison joint display that compares quantitative and qualitative results about the feasibility domain(s) to reach a substantiated answer to whether the study is feasible. Use the table rows to represent each major feasibility domain (or facet of a major domain). Use the table columns to represent the quantitative evidence, qualitative evidence, and overall (joint) interpretation. This table juxtaposes the quantitative and qualitative evidence for each domain so the researcher can determine the level of agreement in the evidence, including areas of corroboration and dissonance, to inform decisions about whether (or to what extent) the procedures are feasible. See Table 1a in Additional file 1 for an example comparison template.

  3. For a mixing reason of completeness, plan to develop a synthesis joint display to synthesize complementary quantitative and qualitative information and develop a comprehensive understanding of feasibility domains in response to the study’s questions. For example, an investigator could assess the acceptability domain by mixing quantitative ratings of intervention satisfaction with qualitative interview data about what participants liked and did not like about the intervention. Like the comparison joint display, this display is typically organized by feasibility domains (the rows) and types of data/results (the columns). The number of columns reflects the nature of the information being examined and might represent different data forms, different stakeholders, and/or different perspectives (e.g., facilitators and barriers). This table summarizes a broad range of findings about each domain, facilitating the investigator’s ability to synthesize the information and develop insights about the complexity of the feasibility of the study procedures. See Tables 2a and 2b in Additional file 1 for example synthesis templates.

  4. For a mixing reason of explanation, plan to develop an interconnection joint display to interconnect quantitative and qualitative information to uncover differential patterns within feasibility domains and address questions about feasibility within different contexts. Start by identifying subgroups that may be important to consider in understanding feasibility within the study contexts. These subgroups might be based on location (e.g., different clinic sites), demographics (e.g., different cultural groups), quantitative measures (e.g., participants with high, medium, and low adherence), or qualitative types (e.g., participants described as fearful, apathetic, and optimistic from thematic analysis). Create a table that cross-tabulates the different groups (the rows) with the group’s corresponding quantitative and qualitative results (the columns). This table can then uncover patterns among the different groups and provide new insights that help to explain what was feasible for whom. See Table 3a in Additional file 1 for an example interconnection template.
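The comparison joint display described in step 2 above can be sketched as a small data structure rendered as a table. The snippet below uses only the Python standard library; the domains, results, quotes, and interpretations are illustrative placeholders, not findings from this article or any study.

```python
# A minimal comparison joint display: rows are feasibility domains,
# columns juxtapose quantitative evidence, qualitative evidence, and a
# joint interpretation. All content below is an invented illustration.
rows = [
    {"domain": "Recruitment",
     "quantitative": "38% of eligible patients enrolled (target: 50%)",
     "qualitative": "Staff cited competing clinic demands as the main barrier",
     "joint_interpretation": "Partially feasible; revise recruitment workflow"},
    {"domain": "Retention",
     "quantitative": "85% completed the 12-week follow-up",
     "qualitative": "Participants valued reminder calls",
     "joint_interpretation": "Feasible; keep reminder procedures"},
]

def render_joint_display(rows, columns):
    """Render rows of a joint display as a plain-text table."""
    widths = {c: max(len(c), *(len(r[c]) for r in rows)) for c in columns}
    header = " | ".join(c.ljust(widths[c]) for c in columns)
    lines = [header, "-+-".join("-" * widths[c] for c in columns)]
    for r in rows:
        lines.append(" | ".join(r[c].ljust(widths[c]) for c in columns))
    return "\n".join(lines)

print(render_joint_display(
    rows, ["domain", "quantitative", "qualitative", "joint_interpretation"]))
```

The same structure extends to the synthesis and interconnection displays: swap the rows for subgroups (clinic sites, adherence strata) or add columns for different stakeholders or perspectives.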

Planning consideration 5: prepare to draw meta-inferences about feasibility from the integrated data

The final mixed methods planning consideration is for investigators to prepare to draw conclusions from the integrated quantitative and qualitative results to form meta-inferences. Meta-inferences require investigators to interpret and consider the implications of what has been learned from the combination of quantitative and qualitative results from their studies [28]. In pilot feasibility studies where not all feasibility domains were selected for mixing, investigators should examine the quantitative, qualitative, and mixed methods data collected to draw conclusions and meta-inferences about overall feasibility. Investigators are encouraged to look for both consistencies and inconsistencies within the different sets of results and consider both as opportunities for learning and gaining insight about the study’s research questions and identifying implications for a future trial [27, 54]. If inconsistencies are discovered, investigators are encouraged to revisit their data and results and attempt to fully understand divergent results, which could lead to deeper insights about important nuances in the feasibility and acceptability of an intervention and/or study procedures.

These meta-inferences should inform the future trial’s intervention design, outcome measures, and/or methodology. Meta-inferences drawn from the combined quantitative and qualitative results may be ideally suited to provide nuanced insights into the modifications that are required to optimize the intervention and study procedures for the targeted participants or the areas that are in need of careful monitoring. However, a potential pitfall is for pilot study investigators to lose track of these insights throughout the pilot study process, particularly if the pilot study involves multiple iterations of testing and refining. Investigators can address this by developing an audit trail or log where they record each meaningful interpretation that occurs during the pilot study process and the corresponding implications for the future trial. Detailed records can be very useful for conveying to reviewers how the full trial is informed by the pilot study results.

Investigators should plan to draw meta-inferences about the feasibility of the future trial by considering the following steps:

  1. Plan to interpret the quantitative, qualitative, and mixed methods results to draw conclusions and meta-inferences about the feasibility of the intervention and study procedures. When interpreting the mixed methods results, consider the insights gained from comparing, synthesizing, and interconnecting the quantitative and qualitative results.

  2. Consider the implications of the meta-inferences for improving the intervention parameters. For example, the implications might suggest modifications for the intervention components, mode of delivery, intervention duration, cultural sensitivity of intervention materials, or role of supporting clinic personnel.

  3. Consider the implications for optimizing the outcome measures. For example, the implications might suggest adjusting the frequency or duration of the measures, adding measures for previously unanticipated outcomes, or improving the appropriateness of the planned measures for participants.

  4. Consider the implications for modifying the trial’s methodology. Possible modifications might include refining recruitment materials, using recruiters of a similar cultural background to the target participants, selecting the most appropriate trial design (e.g., cluster randomized trial vs. non-randomized stepped wedge design based on stakeholder preferences), or refining the timing of quantitative and qualitative data collection.

  5. Maintain an audit trail of modifications identified throughout the pilot study process, including the supporting evidence and meta-inferences that formed the basis of the conclusions. Use this audit trail when planning the next study as well as when describing in grant applications how the subsequent trial is informed by what was learned in the pilot feasibility study.
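The audit trail in step 5 above can be as simple as a structured log. The sketch below is one possible format using only the Python standard library; the entry fields and example content are assumptions for illustration, not a prescribed structure from the article.

```python
# A minimal audit-trail log: record each meaningful interpretation with
# its supporting evidence and its implication for the future trial.
# The field names and example entry are illustrative assumptions.
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AuditEntry:
    recorded_on: str
    observation: str          # what was interpreted from the integrated results
    supporting_evidence: str  # quantitative/qualitative results behind it
    implication: str          # planned modification for the future trial

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, observation, supporting_evidence, implication):
        self.entries.append(AuditEntry(
            recorded_on=date.today().isoformat(),
            observation=observation,
            supporting_evidence=supporting_evidence,
            implication=implication))

    def to_json(self):
        """Serialize for sharing with the team or citing in a grant application."""
        return json.dumps([asdict(e) for e in self.entries], indent=2)

trail = AuditTrail()
trail.record(
    observation="Low adherence concentrated at one clinic site",
    supporting_evidence="Adherence logs plus interviews citing transport barriers",
    implication="Add a remote delivery option in the full trial")
print(trail.to_json())
```

Keeping entries in a serializable form makes it straightforward to later extract the chain of evidence behind each proposed modification when writing the full-trial application.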

Conclusions

We identified a set of five planning considerations with specific steps for using mixed methods to optimize what can be learned from pilot feasibility studies to plan intervention trials. These planning considerations facilitate an investigator’s ability to successfully design a rigorous mixed methods approach that will achieve nuanced insights through meaningful integration in response to the pilot feasibility study’s goals. We encourage investigators to use these planning considerations as models for their own study planning as well as to spur their creativity in combining methods to address their feasibility questions. Although we have described the considerations in discrete and linear steps, investigators should keep in mind that study planning is a complex and iterative process, and planning considerations should be selected and applied in an order that is most appropriate for a particular pilot feasibility study. Furthermore, investigators should be flexible and responsive while conducting a mixed methods pilot feasibility study. Our methodological guidance aims to facilitate investigators’ ability to mix methods, but the planning should not restrict the possibility of following up on unexpected feasibility issues that may be uncovered during the pilot study implementation.

Pilot feasibility studies provide an important function within intervention research by helping investigators optimize their intervention and study procedures for future trials that will determine the efficacy and effectiveness of the intervention. Mixed method approaches have the potential to enhance a pilot study’s ability to generate nuanced and useful information about the feasibility of an intervention and the trial procedures, which can inform any needed modifications to optimize a future definitive trial. Challenges to a rigorous mixed methods approach include trying to apply mixed methods to all feasibility questions, collecting too much data that is difficult to interpret, and missing opportunities to gain new insights from the combination of different data and results. Through careful planning, investigators can address these challenges by identifying one or more key feasibility domains and reasons for mixing methods, focusing data collection sources and timing of data collection for the key domain(s) of interest, and using joint displays to help with integrative analyses and drawing meta-inferences.

Collectively, the planning considerations described in our guidance provide a practical approach to conceptualizing the elements required to optimize the use of mixed methods and achieve integrated insights. The key is for investigators to design a rigorous and focused mixed methods approach that not only includes both quantitative and qualitative data effectively and meaningfully but also integrates the quantitative and qualitative information in response to the pilot study’s specific feasibility questions. Our guidance can help investigators to consider the range of decisions needed at the study conceptualization stage to achieve a mixed methods approach and enhanced insights that will lead to the development of a future intervention trial.

References

  1. Onken LS, Carroll KM, Shoham V, Cuthbert BN, Riddle M. Reenvisioning clinical science: unifying the discipline to improve the public health. Clin Psychol Sci. 2014;2(1):22–34.

  2. Campbell M, Fitzpatrick R, Haines A, et al. Framework for design and evaluation of complex interventions to improve health. BMJ. 2000;321(7262):694–6. https://doi.org/10.1136/bmj.321.7262.694.

  3. O’Cathain A, Croot L, Duncan E, et al. Guidance on how to develop complex interventions to improve health and healthcare. BMJ Open. 2019;9(8):e029954. https://doi.org/10.1136/bmjopen-2019-029954.

  4. Orsmond GI, Cohn ES. The distinctive features of a feasibility study: objectives and guiding questions. OTJR. 2015;35(3):169–77. https://doi.org/10.1177/1539449215578649.

  5. Mellor K, Eddy S, Peckham N, et al. Progression from external pilot to definitive randomised controlled trial: a methodological review of progression criteria reporting. BMJ Open. 2021;11(6):e048178. https://doi.org/10.1136/bmjopen-2020-048178.

  6. Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res. 2011;45(5):626–9. https://doi.org/10.1016/j.jpsychires.2010.10.008.

  7. Bell ML, Whitehead AL, Julious SA. Guidance for using pilot studies to inform the design of intervention trials with continuous outcomes. Clin Epidemiol. 2018;10:153–7. https://doi.org/10.2147/CLEP.S146397.

  8. Blatch-Jones AJ, Pek W, Kirkpatrick E, Ashton-Key M. Role of feasibility and pilot studies in randomised controlled trials: a cross-sectional study. BMJ Open. 2018;8(9):e022233. https://doi.org/10.1136/bmjopen-2018-022233.

  9. Kaur N, Figueiredo S, Bouchard V, Moriello C, Mayo N. Where have all the pilot studies gone? A follow-up on 30 years of pilot studies in clinical rehabilitation. Clin Rehabil. 2017;31(9):1238–48. https://doi.org/10.1177/0269215517692129.

  10. Bowen DJ, Kreuter M, Spring B, et al. How we design feasibility studies. Am J Prev Med. 2009;36(5):452–7.

  11. Thabane L, Ma J, Chu R, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10(1):1. https://doi.org/10.1186/1471-2288-10-1.

  12. Clark LT, Watkins L, Piña IL, et al. Increasing diversity in clinical trials: overcoming critical barriers. Curr Probl Cardiol. 2019;44(5):148–72. https://doi.org/10.1016/j.cpcardiol.2018.11.002.

  13. Stewart AL, Nápoles AM, Piawah S, Santoyo-Olsson J, Teresi JA. Guidelines for evaluating the feasibility of recruitment in pilot studies of diverse populations: an overlooked but important component. Ethn Dis. 2020;30(Suppl 2):745–54. https://doi.org/10.18865/ed.30.S2.745.

  14. Thabane L, Cambon L, Potvin L, et al. Population health intervention research: what is the place for pilot studies? Trials. 2019;20(1):309. https://doi.org/10.1186/s13063-019-3422-4.

  15. Eldridge SM, Chan CL, Campbell MJ, Bond CM, Hopewell S, Thabane L, et al. CONSORT 2010 statement: extension to randomised pilot and feasibility trials. Pilot Feasibility Stud. 2016;2(1):64.

  16. Chan CL, Leyrat C, Eldridge SM. Quality of reporting of pilot and feasibility cluster randomised trials: a systematic review. BMJ Open. 2017;7(11):e016970.

  17. Fletcher A, Jamal F, Moore G, Evans RE, Murphy S, Bonell C. Realist complex intervention science: applying realist principles across all phases of the Medical Research Council framework for developing and evaluating complex interventions. Evaluation (Lond). 2016;22(3):286–303.

  18. Baldeh T, MacDonald T, Kosa SD, Lawson DO, Stalteri R, Olaiya OR, et al. More pilot trials could plan to use qualitative data: a meta-epidemiological study. Pilot Feasibility Stud. 2020;6(1):164.

  19. O’Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1:32.

  20. Creswell JW, Klassen AC, Plano Clark VL, Smith KC. Best practices for mixed methods research in the health sciences. Washington: National Institutes of Health; 2011. (https://obssr.od.nih.gov/research-resources/mixed-methods-research).

  21. Song M, Sandelowski M, Happ MB. Current practices and emerging trends in conducting mixed methods intervention studies in the health sciences. In: Tashakkori A, Teddlie C, editors. SAGE Handbook of Mixed Methods in Social & Behavioral Research. 2nd ed. Sage; 2010. p. 725–47.

  22. Fetters MD, Molina-Azorin JF. Utilizing a mixed methods approach for conducting interventional evaluations. J Mix Methods Res. 2020;14(2):131–44.

  23. Richards DA, Bazeley P, Borglin G, Craig P, Emsley R, Frost J, et al. Integrating quantitative and qualitative data and findings when undertaking randomised controlled trials. BMJ Open. 2019;9(11):e032081.

  24. Susskind L, McKearnan S, Thomas-Larmer J, editors. The consensus building handbook: a comprehensive guide to reaching agreement. Thousand Oaks: Sage; 1999. Available from: https://sk.sagepub.com/reference/the-consensus-building-handbook.

  25. Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. J Mix Methods Res. 2007;1(2):112–33.

  26. Plano Clark VL, Ivankova NV. Mixed methods research: a guide to the field. Thousand Oaks: Sage; 2016.

  27. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 3rd ed. Thousand Oaks: Sage; 2018.

  28. Tashakkori A, Johnson RB, Teddlie C. Foundations of mixed methods research: integrating quantitative and qualitative approaches in the social and behavioral sciences. 2nd ed. Thousand Oaks: Sage; 2021.

  29. Curry LA, Krumholz HM, O’Cathain A, Plano Clark VL, Cherlin E, Bradley EH. Mixed methods in biomedical and health services research. Circ Cardiovasc Qual Outcomes. 2013;6(1):119–23.

  30. O’Cathain A, Murphy E, Nicholl J. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study. BMC Health Serv Res. 2007;7(1):85.

  31. Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs-principles and practices. Health Serv Res. 2013;48(6 Pt 2):2134–56.

  32. Plano Clark VL. Meaningful integration within mixed methods studies: identifying why, what, when, and how. Contemp Educ Psychol. 2019;57:106–11.

  33. Fetters MD, Freshwater D. The 1 + 1 = 3 integration challenge. J Mix Methods Res. 2015;9(2):115–7.

  34. Bazeley P. Integrating analyses in mixed methods research. London: Sage; 2018.

  35. O’Cathain A, Murphy E, Nicholl J. Three techniques for integrating data in mixed methods studies. BMJ. 2010;341:c4587.

  36. Creswell JW, Fetters MD, Plano Clark VL, Morales A. Mixed methods intervention trials. In: Andrew S, Halcomb EJ, editors. Mixed methods research for nursing and the health sciences. Chichester: Wiley; 2009. p. 161–80.

  37. O’Cathain A. A practical guide to using qualitative research with randomized controlled trials. Oxford: Oxford University Press; 2018.

  38. Scantlebury A, McDaid C, Brealey S, Cook E, Sharma H, Ranganathan A, et al. Embedding qualitative research in randomised controlled trials to improve recruitment: findings from two recruitment optimisation studies of orthopaedic surgical trials. Trials. 2021;22(1):461.

  39. Foster JM, Sawyer SM, Smith L, Reddel HK, Usherwood T. Barriers and facilitators to patient recruitment to a cluster randomized controlled trial in primary care: lessons for future trials. BMC Med Res Methodol. 2015;15:18.

  40. Barankay I, Reese PP, Putt ME, Russell LB, Phillips C, Pagnotti D, et al. Qualitative exploration of barriers to statin adherence and lipid control: a secondary analysis of a randomized clinical trial. JAMA Netw Open. 2021;4(5):e219211.

  41. Huang Y-M, Shiyanbola OO. Investigation of barriers and facilitators to medication adherence in patients with type 2 diabetes across different health literacy levels: an explanatory sequential mixed methods study. Front Pharmacol. 2021;12:745749.

  42. Henshall C, Narendran P, Andrews RC, Daley A, Stokes KA, Kennedy A, et al. Qualitative study of barriers to clinical trial retention in adults with recently diagnosed type 1 diabetes. BMJ Open. 2018;8(7):e022353.

  43. Rodríguez-Torres E, González-Pérez MM, Díaz-Pérez C. Barriers and facilitators to the participation of subjects in clinical trials: an overview of reviews. Contemp Clin Trials Commun. 2021;23:100829.

  44. Warren E, Melendez-Torres GJ, Viner R, Bonell C. Using qualitative research to explore intervention mechanisms: findings from the trial of the Learning Together whole-school health intervention. Trials. 2020;21(1):774.

  45. Hovlid E, Bukve O. A qualitative study of contextual factors’ impact on measures to reduce surgery cancellations. BMC Health Serv Res. 2014;14:215.

  46. Freedland KE. Pilot trials in health-related behavioral intervention research: problems, solutions, and recommendations. Health Psychol. 2020;39(10):851–62.

  47. Boeije HR, Drabble SJ, O’Cathain A. Methodological challenges of mixed methods intervention evaluations. Methodol Eur J Res Methods Behav Soc Sci. 2015;11(4):119–25.

  48. Guetterman TC, Fetters MD, Creswell JW. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. Ann Fam Med. 2015;13(6):554–61.

  49. Pearson N, Naylor P-J, Ashe MC, Fernandez M, Yoong SL, Wolfenden L. Guidance for conducting feasibility and pilot studies for implementation trials. Pilot Feasibility Stud. 2020;6(1):167.

  50. Bryman A. Integrating quantitative and qualitative research: how is it done? Qual Res. 2006;6(1):97–113.

  51. Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educ Eval Policy Anal. 1989;11(3):255–74.

  52. Onwuegbuzie AJ, Leech NL. Linking research questions to mixed methods data analysis procedures. The Qualitative Report. 2006;11(3):474–98.

  53. Yin RK. Mixed methods research: are the methods genuinely integrated or merely parallel. Res Sch. 2006;13(1):41–7.

  54. Fetters MD. The mixed methods research workbook: activities for designing, implementing, and publishing projects. Thousand Oaks: Sage; 2020.

  55. Hallingberg B, Turley R, Segrott J, Wight D, Craig P, Moore L, et al. Exploratory studies to decide whether and how to proceed with full-scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud. 2018;4(1):104.

  56. Herbert E, Julious SA, Goodacre S. Progression criteria in trials with an internal pilot: an audit of publicly funded randomised controlled trials. Trials. 2019;20(1):493.

  57. Young HML, Goodliffe S, Madhani M, Phelps K, Regen E, Locke A, et al. Co-producing progression criteria for feasibility studies: a partnership between patient contributors, clinicians and researchers. Int J Environ Res Public Health. 2019;16(19):3756.

  58. Hampson LV, Williamson PR, Wilby MJ, Jaki T. A framework for prospectively defining progression rules for internal pilot studies monitoring recruitment. Stat Methods Med Res. 2018;27(12):3612–27.

  59. Creswell JW, Poth CN. Qualitative inquiry & research design: choosing among five approaches. 4th ed. Thousand Oaks: Sage; 2018.


Acknowledgements

The authors acknowledge with gratitude the MMRTP scholars and faculty who shared their experiences applying mixed methods in pilot feasibility studies and provided inspiration for this work.

Funding

This work was made possible with support from the Mixed Methods Research Training Program (MMRTP) for the Health Sciences supported by the Office of Behavioral and Social Sciences Research (OBSSR) and the National Institute of Mental Health under Grant R25MH104660 (PI: Gallo). In addition, Drs. Aschbrenner and Kruse were supported by the Implementation Science Center for Cancer Control Equity, a National Cancer Institute funded program (P50 CA244433). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Mental Health or the National Cancer Institute.

Author information


Contributions

The authors confirm contribution to the paper as follows: initial conception of methodological guidance, Drs. Aschbrenner and Plano Clark; draft manuscript preparation, Drs. Aschbrenner and Plano Clark; and iterative review and feedback on manuscript drafts, Drs. Kruse and Gallo. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Kelly A. Aschbrenner.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Aschbrenner, K.A., Kruse, G., Gallo, J.J. et al. Applying mixed methods to pilot feasibility studies to inform intervention trials. Pilot Feasibility Stud 8, 217 (2022). https://doi.org/10.1186/s40814-022-01178-x


Keywords