
Testing the feasibility, acceptability, and preliminary effect of a novel deliberate practice intervention to reduce diagnostic error in trauma triage: a study protocol for a randomized pilot trial

Abstract

Background

Non-compliance with clinical practice guidelines in trauma remains common, in part because physicians make diagnostic errors when triaging injured patients. Deliberate practice, purposeful participation in a training task under the oversight of a coach, effectively changes behavior in procedural domains of medicine but has rarely been used to improve diagnostic skill. We plan a pilot parallel randomized trial to test the feasibility, acceptability, and preliminary effect of a novel deliberate practice intervention to reduce physician diagnostic errors in trauma triage.

Methods

We will randomize a national convenience sample of physicians who work at non-trauma centers (n = 60) in a 1:1 ratio to a deliberate practice intervention or to a passive control. We will use a customized, theory-based serious video game as the basis of our training task, selected based on its behavior change techniques and game mechanics, along with a coaching manual to standardize the fidelity of the intervention delivery. The intervention consists of three 30-min sessions with content experts (coaches), conducted remotely, during which physicians (trainees) play the game and receive feedback on their diagnostic processes. We will assess (a) the fidelity with which the intervention is delivered by reviewing video recordings of the coaching sessions; (b) the acceptability of the intervention through surveys and semi-structured interviews, and (c) the effect of the intervention by comparing the performance of trainees and a control group of physicians on a validated virtual simulation. We hypothesize that trainees will make ≥ 25% fewer diagnostic errors on the simulation than control physicians, a large effect size. We additionally hypothesize that ≥ 90% of trainees will receive their intervention as planned.

Conclusions

The results of the trial will inform the decision to proceed with a future hybrid effectiveness-implementation trial of the intervention. They will also provide a deeper understanding of the challenges of using deliberate practice to modify the diagnostic skill of physicians.

Trial registration

ClinicalTrials.gov (NCT05168579); 23 December 2021.


Contributions to the literature

  • Improving adherence to trauma triage clinical practice guidelines has the potential to improve the care provided to the 1 million injured patients who present each year to non-trauma centers in the USA.

  • Existing methods of increasing adherence to guidelines in trauma have had limited success.

  • We describe the development of a novel deliberate practice intervention to reduce diagnostic errors in trauma triage.

  • This work contributes to the literature by providing a new method of fostering the use of clinical practice guidelines and will enhance understanding of health professionals’ diagnostic processes.

Background

A guiding principle in trauma care is that severely injured patients should receive treatment at trauma centers (highly resourced, accredited hospitals) to reduce preventable morbidity and mortality, while minimally injured patients should receive treatment at non-trauma centers to minimize the costs of care [1,2,3]. Professional organizations have published well-validated guidelines that specify criteria for triage—the categorization of patients as having minor or severe injuries—with the objective of minimizing both under- and over-triage [4]. Despite 40 years of efforts to disseminate the clinical practice guidelines, non-compliance remains common [5,6,7,8].

In a series of experimental and observational studies, we identified diagnostic errors as a major cause of non-compliance at non-trauma centers [9, 10]. We found these errors occurred in part because physicians relied on heuristics (pattern recognition or intuitive judgments) to screen patients [11]. In other words, physicians reduced a complex question (“does this patient meet the criteria for transfer to a trauma center?”) to a simpler one (“is this patient badly injured or not?”) [12]. However, features considered pathognomonic for severe injuries occurred infrequently and did not capture the nuances of clinical practice guidelines [13]. Consequently, decisions based on responses to the simpler question resulted in predictable errors in judgment. Few interventions exist that improve the diagnostic skills of practicing physicians.

Intervention conceptual model

In prior work, we collaborated with Schell Games (Pittsburgh, PA) to develop two serious video games—video games used for applied purposes—to reduce diagnostic errors in trauma triage. In clinical trials, we found that exposure to the games had a small to moderate effect, reducing under-triage by 10–18% [14, 15]. Consistent with best practice guidelines for the development of behavioral interventions, we decided to further refine the interventions to maximize their efficacy before proceeding with widespread distribution [16]. Deliberate practice, defined as goal-oriented, coach-supervised training, has facilitated the acquisition of expertise in domains as disparate as aviation combat and chess [17,18,19]. The method requires that coaches observe performance on a representative task, identify opportunities for improvement, and provide timely, specific feedback that the learner can use to refine their behavior [19]. As described by Ericsson et al., the efficacy of deliberate practice depends on three variables [17]. First, deliberate practice requires the identification of a representative training task, defined as one that captures the essence of expertise in the domain, and that allows trainees to practice their skills in a consistent and reproducible manner. Ideally, training tasks align challenge with skill, increasing in difficulty as performance improves. Additionally, they should allow for distributed practice, with training spaced across time so that knowledge can transfer from working to long-term memory. Second, deliberate practice entails the delivery of immediate, high-quality feedback to allow the trainee to acquire and to refine the skills necessary to improve their performance on the training task. The coach should provide task-specific, concise feedback, recommending precise actions for behavior change. Third, deliberate practice benefits from the creation of a collaborative relationship between the coach and trainee that fosters autonomous motivation (the desire to perform a task because it generates innate satisfaction or aligns with deeply held values) in the trainee. The development of rapport increases the likelihood that the trainee will persist in practice, remain open to challenging new experiences, and participate in self-reflective processes.

In procedural domains in medicine, a recent systematic review showed a strong correlation between the use of deliberate practice and positive educational outcomes (r = 0.71) [20]. However, diagnosis involves cognitive processes that occur unconsciously, making them difficult to observe and to analyze [21]. This may explain why use of deliberate practice to influence diagnostic skill occurs much less frequently [22]. We hypothesized that a deliberate practice intervention might improve diagnostic skill in trauma triage, provided we could develop an appropriate training task where trainees could practice making diagnoses and where coaches could observe their process and provide useful, actionable advice on how to improve their performance. Through an iterative process, our multi-disciplinary team with expertise in adult education, behavioral science, deliberate practice, qualitative research methods, emergency medicine, and trauma surgery developed a deliberate practice intervention to improve the diagnostic skill of emergency medicine physicians working at non-trauma centers by tackling each of these variables in turn.

Aims and hypotheses

The aims of this study are to test the feasibility, acceptability, and preliminary effect of the novel intervention in a pilot trial. We hypothesize that physicians exposed to the intervention will under-triage ≥ 25% fewer patients than physicians in the control arm (primary outcome). We further hypothesize that ≥ 90% of trainees will receive their intervention as planned (secondary outcome).

Methods

Trial design

This study will adhere to the CONSORT guidelines (extension for pilot and feasibility trials) for reporting clinical trials (see Additional file 1). To evaluate the deliberate practice intervention, we will recruit a convenience sample of emergency physicians and randomize them in a 1:1 ratio to the intervention or to a passive control group. Members of the intervention group (‘trainees’) will be paired with a content expert (a ‘coach’), and will receive three, weekly, 30-min remote coaching sessions during which they play a customized, theory-based video game on Zoom and receive feedback on the diagnostic processes they use to triage trauma patients. We will structure the process evaluation using Proctor's Framework for Outcomes in Implementation Research [23]. We will record the coaching sessions and will review the recordings to assess the feasibility and the fidelity of intervention delivery. We will assess the acceptability, adoption, and appropriateness of the intervention through surveys and semi-structured interviews. Finally, we will compare the performance of participants in the intervention and control group on a validated virtual simulation, using under-triage (the proportion of severely injured patients not transferred to a trauma center) as an interim measure of efficacy [3].

Trial participants

Coaches

Three members of the study team with content expertise in trauma (DM, RF) and emergency medicine (JE) will act as the coaches, guiding trainees through the experience of playing the video game, providing instruction on how to diagnose severely injured patients, and reinforcing best practice triage principles. Before the start of the trial, coaches will receive three 1-h training sessions to learn game mechanics, to review the coaching manual, and to gain experience in the use of pedagogical strategies that foster adult learning. We will invite five local emergency medicine residents and advanced practice providers who staff emergency departments to participate in these practice coaching sessions. Members of the team with expertise in adult education (RA), deliberate practice (DW), and behavioral science (BF) will observe the sessions and will provide feedback to the coaches on their performance.

Trainees

Using a strategy that has proven successful in the past, we will recruit physicians to serve as trainees through respondent-driven sampling. We will contact physicians who have participated in our research previously (N ~ 600) and will ask them to refer us to two colleagues who might be willing to participate in the study. Eligible physicians must treat adult patients in the Emergency Department of either a non-trauma center or a Level III/IV trauma center in the USA. We will obtain digital consent from eligible physicians, informing them that the study focuses on evaluating how best to disseminate clinical practice guidelines in trauma. At the time they provide consent, they will also complete a questionnaire describing their personal characteristics.

Randomization and blinding

A member of the study team (DM) will assign eligible physicians to the intervention or control group in a 1:1 ratio, using a randomization schema built using block sizes of 4. After enrolling a participant, she will obtain the intervention assignment from a central database and will inform participants of their assignment. Although we cannot blind study personnel and participants to the intervention after allocation, we will mask condition assignment during the analysis phase.
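As a concrete illustration, the sketch below shows one way a permuted-block allocation schema with a block size of 4 and 1:1 allocation could be generated; this is a minimal example in Python, and the function name, seed, and pre-generated schedule are illustrative assumptions rather than the trial's actual allocation code.

```python
# Minimal sketch of permuted-block randomization (block size 4, 1:1 allocation).
# Illustrative only: function name, seed, and output format are assumptions.
import random

def permuted_block_schedule(n_participants: int, block_size: int = 4, seed: int = 2021):
    """Return an ordered list of arm assignments using permuted blocks."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_participants:
        # each block contains equal numbers of intervention and control slots
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # randomize the order within the block
        schedule.extend(block)
    return schedule[:n_participants]

if __name__ == "__main__":
    # e.g., an allocation sequence for the planned 60 physicians
    print(permuted_block_schedule(60))
```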

Study protocol

After randomization, participating physicians will receive written instructions on how to complete study tasks. We will ask those in the intervention group to select one of the two 3-week blocks of coaching sessions, and will mail them an iPad with the video game and the Zoom app pre-loaded. We will pair trainees with a coach (DM, JE, RF) and will ask coach-trainee dyads to schedule three 30-min weekly sessions during their selected block. At the completion of the 3 weeks, we will ask trainees to participate in a semi-structured, debriefing interview to assess the acceptability of the intervention and to use an online virtual simulation to assess the effect of the intervention on diagnostic errors in triage. We will ask passive controls to complete the same simulation within 3 weeks of enrollment. Study tasks will take approximately 3 h for those in the intervention group and 1 h for those in the control group. Participants will receive personalized reminder emails at weekly intervals for the duration of the study, or until they complete their tasks.

We will use a financial incentive to increase response rates, setting the size using a wage-based model of reimbursement. Physicians in the intervention group will keep the iPad as an honorarium (approximate value $300), while those in the control group will receive a $100 gift card conditional on the completion of the simulation.

Deliberate practice intervention

The deliberate practice intervention consists of three, weekly, 30-min coaching sessions, conducted remotely. The trainee will play a serious video game (i.e., the training task) on their iPad, sharing their screen through the Zoom app with the coach. Coaches will use the coaching manual to structure these sessions and to ensure the standardization of content. Specifically, the coaches will observe trainees' performance as they play the game, and will provide feedback on the process they use to make diagnoses in trauma triage. The objective of the training sessions is to refine physicians' pattern recognition of severely injured trauma patients (i.e., their heuristics).

Training task

Shift with Friends, developed in collaboration with Schell Games (Pittsburgh, PA), has 10 levels, each with a 5-step game loop: players triage 10 injured patients over 90 s, compare two of the cases to identify similarities/differences so that they can derive the ‘rule’ for the level, receive standardized feedback on their performance, have the option of triaging an additional 10 cases over 90 s, and finally review the decision principle. The game is grounded in the method of analogical encoding—the idea that the process of performing structured case comparisons allows players to derive decision principles for themselves and therefore makes those decision principles memorable [24]. The game uses five behavior change techniques (repetition and substitution, shaping of knowledge, feedback and monitoring, goal setting, and provision of observable samples of behavior) and delivers them using game mechanics that include time-pressure, variation in the difficulty of different levels, and drag-and-drop mechanics (see Table 1) [25, 26]. The behavior change techniques and game mechanics make the diagnostic task explicit, allowing the coach to observe the trainee’s train-of-thought and to identify precise opportunities for improvement. The graded difficulty of the different levels allows the coach to titrate the complexity of the training task to ensure that novices do not become overwhelmed, while more expert physicians do not become bored. The 5-step game loop, which occurs over the course of 5 to 6 min, facilitates repeated practice sessions. Finally, the game includes standardized feedback, which provides opportunities for a coach to offer additional, personalized comments tailored to the trainee's specific goals.

Table 1 Description of deliberate practice intervention (Shift with Friends)

Coaching manual

To maximize the fidelity of the intervention delivery, we developed a manual to serve as a guide for coaches. The manual specifies the decision principles and the structure of each session [27]. For example, in the first session, participants will cover the decision principle that they should consider the number of body regions involved in the injury when making a triage decision: individually minor or moderate injuries could cumulatively sum to a severe injury complex. We include sample scripts and prompts for coaches to use as they progress across the sessions [27]. The scripts range from suggested language about how to establish rapport (e.g., enlisting them as partners in the endeavor to improve patient outcomes) to recommendations about how to customize feedback based on the trainee’s goals (e.g., “you mention that you struggle to communicate with surgeons at the referral center, here’s what you might say when asking if you can transfer your patient”). The prompts include phrasing for questions, structured so that they progress from perception (e.g., “what do you see here?”) to knowledge building (e.g., “what does this mean for our decision principle?”) to checking for understanding (e.g., “what might happen if you were not able to transfer the patient?”), and probes to elicit thought processes (“tell me more about that?”) [27]. Finally, the coaching manual includes technical vocabulary that coaches can use as a reference and a detailed description of each level of the game with exemplars that covered content, bugs, and predictable errors that might occur during game play by trainees [27].

We refined the coaching manual iteratively based on observations made during a series of practice coaching sessions, introducing additional pedagogical strategies to improve coaching performance (see Table 2) [28]. For example, coaches could not always quickly parse the etiology of errors made during game play, impeding their ability to provide relevant, concise feedback to trainees. We therefore introduced a think-aloud technique, asking trainees to articulate the trajectory of their thoughts as they triaged patients or compared cases. We will report further adaptations to the intervention during the pilot trial using the FRAME (Framework for Reporting Adaptation and Modifications–Expanded) checklist [29]. We provide a schematic of the components of the intervention in Fig. 1, a logic model of the intervention in Additional file 2, screenshots of the game (Shift with Friends) in Fig. 2, and a current draft of the coaching manual in Additional file 3.

Table 2 Key pedagogical strategies emphasized in the coaching manual. We iteratively refined the coaching manual to specify relevant pedagogical principles based on observations during practice coaching sessions [26]
Fig. 1

Framework depicting the process of intervention development. We attempt to make transparent how each component of the intervention is intended to intervene on the behavioral process, with relationships among phases depicted with arrows

Fig. 2

Screenshots from Shift with Friends demonstrating the steps in the game loop from level 1. a Triage of 10 cases in 90 s. b Generic feedback provided by in-game character on performance during triage round. c Review of contextual cues to identify generalizable principles. d Generation of summative decision principle. Game play is supplemented by interactions with the coach to ensure that content is tailored to the trainee’s goals and that feedback is personalized to their performance

Data sources and management

We summarize the timeline, study procedures, and the data collection plan in Table 3.

Table 3 Summary of timeline, study procedures, and data collection

Questionnaire to assess personal characteristics

At the time of enrollment, each physician participant will answer questions about age, sex, race, ethnicity, educational background (board certification, ATLS certification, years since completion of residency), and practice environment (trauma designation of their hospital).

Tracking database

We will maintain a tracking database with a list of scheduled coaching sessions. One member of the study team (DM) will update the database daily with the status of sessions (completed v. not, videotaped v. not), which we will use to assess the feasibility of delivering the intervention.

Coaching sessions

All the coaching sessions will be recorded and uploaded to a secure server. Two coders (KJR, JLB) will review the recordings, applying the coaching manual to assess the fidelity of intervention delivery and the Wisconsin Surgical Coaching Rubric to evaluate the performance of the coaches. The Rubric assesses four domains, asking if the coach (1) engages the trainee as an equal participant in learning; (2) uses questions and prompts to guide the trainee in self-reflection; (3) provides constructive feedback; and (4) guides goal setting [30].

Post-intervention debriefing materials

After completing the coaching sessions, we will ask participants in the intervention group to complete the User Engagement Scale–Short Form to assess their engagement with the intervention [31]. The scale has 12 items that measure focused attention, perceived usability, aesthetic appeal, and reward. Additionally, two qualitative researchers (KJR, JLB) will conduct 20-min semi-structured interviews with trainees, probing their opinions of the acceptability, adoption, and appropriateness of the intervention, inviting suggestions for change, and asking for their assessment of the quality of the coaching.

Virtual simulation to assess effect size

Trial participants will log into a website to complete a virtual simulation designed to assess physician decision making in trauma triage. We previously collaborated with a gaming company (Breakaway Ltd.; Hunt Valley, MD) to develop a 2D simulation that reflects the environment of an Emergency Department at a non-trauma center. The simulation has both internal reliability and criterion validity. Importantly, we have found that, at the group level, physicians make similar decisions for trauma patients on the simulation as they do in real life [11]. Responses to the virtual simulation will be transmitted from the website to a secure server hosted by the University of Pittsburgh.

The simulation includes ten cases: four severely injured patients, two minimally injured patients, and four critically ill non-trauma cases. Users must evaluate and manage these cases over 42 min, simulating a busy ED shift. New patients arrive at pre-specified (but unpredictable) intervals, so that physicians must manage multiple patients concurrently. Each case includes a 2D rendering of the patient, a chief complaint, vital signs which update every 30 s, a history, and a written description of the physical exam. Without appropriate clinical intervention by the player, severely injured patients and critically ill distractor patients decompensate and die over the course of the simulation.

Physicians manage patients by selecting from a pre-specified list of 250 medications, studies, and procedures. Some orders affect patients’ clinical status, leading to corresponding changes in their vital signs and physical exam. Other orders generate additional information, presented as reports added to the patients’ charts. Each case ends when the player either makes a disposition decision (admit, discharge, transfer) or the patient dies.

Analyses

We will include in the implementation outcome analysis physicians who do not complete all three coaching sessions (i.e., intention-to-treat). We will exclude from the service outcome analysis those who do not complete the simulation (i.e., those who have missing data). We will summarize physician characteristics using means (standard deviations) for continuous variables and proportions (%) for categorical variables.

Feasibility, fidelity, acceptability, adoption, and appropriateness (implementation outcomes)

We will use Proctor’s framework to define the implementation outcomes for the process evaluation [23]. We will define ‘feasibility’ as the practicability of delivering this intervention as planned, and will quantify the proportion of coach-trainee dyads that complete all three 30-min training sessions. We will define ‘fidelity’ as adherence to the coaching manual and will quantify the number of session components provided to each participant. As a secondary measure of this construct, we will compare differences in the quality of coaching across domains of the Wisconsin Surgical Coaching Rubric. We will define ‘acceptability’ as the perception that a given intervention is agreeable, palatable, or satisfactory, ‘adoption’ as the intention to try the intervention, and ‘appropriateness’ as the perceived fit of the intervention. To assess these constructs, we will summarize responses to the User Engagement Scale-Short Form, and will code the semi-structured interviews, categorizing responses to the range of questions about the acceptability (e.g., “did the experience meet your expectations?”), adoption (e.g., “have you been able to use any of the information provided in the sessions?”), appropriateness of the intervention (e.g., “how appropriate was the intervention for you?”), suggestions about how to improve the experience (e.g., “do you have any suggestions to increase the feasibility of implementation?”), and the quality of the coaching (e.g., “what do you think the coach added to the experience?”).
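To make the quantitative implementation outcomes concrete, the sketch below shows how feasibility (dyads completing all three sessions) and fidelity (session components delivered per participant) could be tabulated from the tracking database; the data frame and column names are hypothetical stand-ins for the study's actual fields.

```python
# Illustrative tabulation of feasibility and fidelity from tracking data.
# Column names (trainee_id, session_completed, components_delivered) are hypothetical.
import pandas as pd

sessions = pd.DataFrame({
    "trainee_id":           [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "session_completed":    [1, 1, 1, 1, 1, 0, 1, 1, 1],   # 1 = session held as scheduled
    "components_delivered": [6, 5, 6, 6, 4, 0, 6, 6, 5],   # manual-specified components per session
})

# Feasibility: proportion of coach-trainee dyads completing all three sessions
completed_all = sessions.groupby("trainee_id")["session_completed"].sum().eq(3)
feasibility = completed_all.mean()
print(f"Feasibility: {feasibility:.0%}")

# Fidelity: number of session components provided to each participant
fidelity_per_trainee = sessions.groupby("trainee_id")["components_delivered"].sum()
print(fidelity_per_trainee)
```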

Efficacy (service outcome)

For the purposes of this analysis, we will define ‘efficacy’ as compliance with clinical practice guidelines in the triage of trauma patients and will concentrate on disposition decisions made during the simulation. We will score the decisions for each severely injured trauma case based on American College of Surgeons guidelines as triaged appropriately or not [3]. We will summarize triage decisions at the group-level and calculate the proportion of under-triage by group:

$$\frac{\textrm{number}\ \textrm{of}\ \textrm{severely}\ \textrm{injured}\ \textrm{patients}\ \textrm{not}\ \textrm{transferred}\ \textrm{to}\ \textrm{trauma}\ \textrm{centers}}{\textrm{total}\ \textrm{number}\ \textrm{of}\ \textrm{severely}\ \textrm{injured}\ \textrm{patients}}$$

We will assess the effect of the intervention compared with the control on trainee performance using generalized linear models, clustered at the trainee level.
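The sketch below illustrates, with invented data, how group-level under-triage could be computed and how a binomial model clustered at the trainee level could be fit; clustering is handled here with generalized estimating equations as one reasonable approach, and the variable names are assumptions for the example rather than the trial's analysis code.

```python
# Illustrative analysis sketch: group-level under-triage and a clustered binomial model.
# All data and column names are invented for the example.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per severely injured simulation case per physician
cases = pd.DataFrame({
    "physician_id": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "group":        ["intervention"] * 8 + ["control"] * 8,
    "undertriaged": [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0],  # 1 = not transferred
})

# Under-triage by group: severely injured patients not transferred / all severely injured patients
print(cases.groupby("group")["undertriaged"].mean())

# Binomial model with clustering at the physician (trainee) level via GEE
model = smf.gee("undertriaged ~ group", groups="physician_id",
                data=cases, family=sm.families.Binomial())
print(model.fit().summary())
```

A logistic regression with cluster-robust standard errors would be an alternative way to account for the repeated cases nested within each physician.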

Decision to proceed with a definitive trial of the intervention

Results from the analysis will inform the decision about whether to proceed with a future definitive trial of the intervention. Given the complexity of the intervention, we will classify the pilot trial as a success if the intervention has a large effect (i.e., exceeding that of the video games alone) and if ≥90% of trainees receive the intervention as planned.

Human subjects and power calculation

We used Cohen’s method of estimating power for behavioral trials, basing our calculation on the assumption that the data will be continuous and normally distributed [32]. We have designed the experiment to detect a 25% reduction in under-triage (a large effect size) between physicians in the intervention and control groups, with an alpha of 0.05 and a power of 80%. Based on these estimates, and anticipating a 67% retention rate in the control arm, we plan to recruit 60 physicians (30 physicians per group).
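For reference, the sketch below shows a generic Cohen-style sample size calculation for a two-group comparison with a large standardized effect size (d = 0.8), an alpha of 0.05, and 80% power; it illustrates the general approach only and does not reproduce the protocol's exact mapping from a 25% reduction in under-triage to an effect size or its retention adjustment.

```python
# Generic power calculation sketch (not the protocol's exact calculation).
# Assumes a two-sided, two-sample comparison with Cohen's d = 0.8.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 26 per group
```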

Security, ethics, and dissemination

Data security

On enrollment in the trial, participants will receive a unique identifier. All participants will use that identifier to log in to the website that hosts the virtual simulation. Those assigned to the deliberate practice group will use it to access Shift with Friends as well. Only the primary investigator and the qualitative researchers will have access to the linkage file connecting the identifier to the physician’s name and contact information. This file will be encrypted and stored on a secure server at the University of Pittsburgh.

Ethics

The University of Pittsburgh Institutional Review Board approved this study (STUDY20120026). We do not plan any interim analyses. We will ask participants to communicate any adverse events or unintended effects of participation via email. We have registered the trial on clinicaltrials.gov (NCT05168579).

Dissemination of results

Results from the study will be reported to the public through manuscripts and oral presentations at national meetings. We will provide an abstract of the findings to all participants. Access to the de-identified dataset will be made available upon written request to the study team.

Discussion

This paper summarizes the protocol we will use to test the feasibility of using a deliberate practice intervention to improve physician diagnostic skill in trauma triage. Our overarching aim is to increase physician adherence to clinical practice guidelines at non-trauma centers. Strengths of the intervention include an explicit grounding in theory, translation of deliberate practice to the refractory problem of physician diagnostic skill, and an iterative, user-centered design process focused on ensuring the fidelity of intervention delivery.

Few interventions exist to improve the diagnostic skill of physicians who have completed graduate medical education. The gold standard in trauma triage is Advanced Trauma Life Support [33]. Designed by the American College of Surgeons, the textbook-based course exposes learners to rule-based algorithms and essential skills over 16 h. Implicitly, ATLS uses the rational actor model of decision making as its theory of behavior, and attempts to shape knowledge as its behavior change technique [34, 21]. Over 1 million providers have received their ATLS certification; yet surprisingly little evidence exists that certification changes performance in practice [33, 35]. Based on formative research, we believe that the dual process model of cognition better explains diagnostic skill in trauma triage and have experimented with different methods of behavior change aligned with this theory to improve triage practices [12, 36]. Prior interventions have had small to moderate effect sizes [14, 15, 37]. Consequently, in response to best-practice guidelines for the development of behavioral interventions, we will now test deliberate practice as an alternative method of behavior change to ensure that we have maximized the efficacy of our interventions before proceeding to widespread distribution [16].

Deliberate practice is an appealing adjunct because of successes in other domains (e.g., music, sports, combat) and theoretical compatibility with the dual process model of cognition underlying our video games [17,18,19]. However, its application to influence diagnostic skill has occurred infrequently, perhaps because of the difficulty of designing appropriate training tasks [22]. Diagnosis routinely occurs under complex task conditions, difficult to replicate in the laboratory or classroom. Patients rarely present with pathognomonic features. Physicians must make decisions rapidly and while distracted by competing demands on their attention [38]. Moreover, diagnosis occurs unconsciously [39]. Consequently, coaches may struggle to understand the source of errors and to provide useful, actionable feedback. To address these challenges, we selected a puzzle video game, where players must triage 10 patients over 90 s, as the basis of our training task, combined with a think-aloud approach to allow the coach insight into the thought processes of the trainee. If successful, this approach offers a potential template for others interested in using deliberate practice to improve diagnostic skill.

Our intervention and protocol development focused on the importance of ensuring the fidelity of intervention delivery, as recommended by the NIH Behavior Change Consortium [40]. The efficacy of deliberate practice depends on the ability of the coach to provide personalized, relevant feedback and to foster a collaborative relationship with the trainee that motivates them to engage with learning. Coaches need a wide variety of complex skills to accomplish these tasks; a lack of these skills can compromise the trainee experience and can have a negative effect on behavioral outcomes [41]. We developed a coaching manual with sample scripts, question prompts, and didactic information to guide coaches as they delivered the content of the intervention. We iteratively refined the manual based on observations made during practice and pilot coaching sessions, specifying pedagogical principles that coaches should use to address predictable difficulties that arose as users engaged with the material [28]. We anticipate identifying additional opportunities to improve our protocol for delivering the intervention during this pilot trial.

The study has several potential limitations. First, we will use a convenience sample to test the efficacy of the interventions, which may not represent the general population of physicians who serve in non-trauma emergency departments. Second, we use a virtual simulation with a limited number of cases to assess outcome rather than decisions made in practice. Our previous validation study provides evidence of the simulation’s ability to predict group-level performance in practice, making it, we believe, a reasonable interim outcome measure. If the present study affirms the potential of such interventions, real-world efficacy and effectiveness testing would be warranted. Third, the size of the sample precludes the ability to adjust for the influence of individual coach differences on the estimate of the effect of the intervention. We plan qualitative analyses to evaluate the quality of the coaching, which will inform future tests of the intervention. Fourth, we selected deliberate practice as an adjunct to our existing video game as a means of augmenting its efficacy without iteratively testing a full panel of options. This decision had a pragmatic justification: a multiphase optimization strategy approach (arguably the best-practice method of developing effective behavioral interventions) would have exceeded our limited resources [42]. Moreover, we had conceptual reasons to believe that deliberate practice could effectively change behavior. However, our failure to consider a full suite of methods of behavior change may have limited the rigor of the work.

Conclusions

We developed a novel intervention to improve diagnostic skill in trauma triage using principles adapted from both the dissemination and implementation literature and the literature on the acquisition of expertise. We will test the fidelity, acceptability, and efficacy of the intervention in a pilot feasibility trial, which will allow us to understand the success of our theoretical behavioral and design principles.

Availability of data and materials

Shift with Friends is available for download on the iOS Apple Store. A de-identified dataset will be made available upon request to the PI, and after appropriate authorization by the University of Pittsburgh Office of Research.

References

  1. US Department of Health and Human Services. Model Trauma System Planning and Evaluation. https://www.hsdl.org/?view&did=463554. Published Feb 2006. Accessed 28 Dec 2021.

  2. American Trauma Society. Trauma Center Levels Explained. https://www.amtrauma.org/page/traumalevels. Accessed 27 Sept 2020.

  3. Committee on Trauma – American College of Surgeons. Resources for optimal care of the injured patient 2006. Chicago: American College of Surgeons; 2006.


  4. American College of Surgeons – Committee on Trauma. Advanced Trauma Life Support for Doctors: Student Course Manual. Chicago: American College of Surgeons; 2020.


  5. Delgado MK, Yokell MA, Staudenmayer KL, et al. Factors associated with the disposition of severely injured patients initially seen at non-trauma center emergency departments. JAMA Surg. 2014;149:422–30.


  6. Zhou Q, Rosengart MR, Billiar TR, et al. Factors associated with non-transfer in trauma patients meeting American College of Surgeons’ criteria for transfer at nontertiary centers. JAMA Surg. 2017;152:369–76.


  7. Mohan D, Rosengart MR, Farris C, et al. Assessing the feasibility of the American College of Surgeons’ benchmarks for the triage of trauma patients. Arch Surg. 2011;146:786–92.


  8. Mohan D, Barnato AE, Rosengart MR, et al. Triage patterns of patients with moderate-to-severe injuries presenting to non-trauma centers. Ann Surg. 2015;261:383–9.


  9. Mohan D, Barnato AE, Angus DC, et al. Determinants of compliance with transfer guidelines for trauma patients: a retrospective analysis of CT scans acquired prior to transfer to a level I trauma center. Ann Surg. 2010;251:946–51.


  10. Mohan D, Rosengart MR, Farris C, et al. Sources of non-compliance with clinical practice guidelines in trauma triage: a decision science study. Implement Sci. 2012;7:103.


  11. Mohan D, Angus DC, Ricketts D, et al. Assessing the validity of using serious game technology to analyze physician decision making. PLoS One. 2014;9:e105445.


  12. Kahneman D, Frederick S. Representativeness revisited: attribute substitution in intuitive judgment. In: Gilovich T, Griffin D, Kahneman D, editors. Heuristics and Biases: The Psychology of Intuitive Judgment. New York: Cambridge University Press; 2002.


  13. Kulkarni K, Dewitt B, Fischhoff B, et al. Defining the representativeness heuristic in trauma triage: a retrospective observational cohort study. PLoS One. 2019;14:e021220.


  14. Mohan D, Farris C, Fischhoff B, et al. Efficacy of educational video game versus traditional educational apps at improving physician decision making in trauma triage: a randomized clinical trial. BMJ. 2017;359:j5416.


  15. Mohan D, Fischhoff B, Angus DC, et al. Using serious video games to improve physicians’ heuristics in trauma triage: a randomized clinical trial. PNAS. 2018;115:9204–9.


  16. Onken LS, Carroll KM, Shoham V, et al. Re-envisioning clinical science: unifying the discipline to improve public health. Clin Psychol Sci. 2014;2:22–34.


  17. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79:S70–81.


  18. Ericsson A, Pool R. Peak: secrets from the new science of expertise. Boston: Houghton Mifflin Harcourt; 2017.


  19. Ericsson KA, Krampe RT, Tesch-Romer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406.


  20. McGaghie WC, Issenberg SB, Cohen ER, et al. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011;86:706–11.


  21. Kahneman D. A perspective on judgment and choice: mapping bounded rationality. Am Psychol. 2003;58:697–720.


  22. Abdulnour RE, Parsons AS, Muller D, et al. Deliberate practice at the virtual bedside to improve clinical reasoning. N Engl J Med. 2022. https://doi.org/10.1056/NEJMe2204540.

  23. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38:65–76.


  24. Moran S, Bereby-Meyer Y, Bazerman M. Stretching the effectiveness of analogical training in negotiations: teaching diverse principles for creating value. Negot Confl Manag Res. 2008;1:99–134.


  25. Michie S, Richardson M, Johnston M, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. 2013;46:81–95.


  26. Perski O, Blandford A, West R, et al. Conceptualizing engagement with digital behavior change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. 2017;7:254–67.


  27. Lemov D. The coach's guide to teaching. Clearfield: John Catt Educational Ltd; 2020.


  28. Lemov D. Teach like a champion 3.0. Hoboken: Jossey-Bass; 2021.


  29. Hoffman TC, Glasziou PP, Milne R. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014:348. https://doi.org/10.1136/bmj.g1687.

  30. Vande Walle KA, Pavuluri Quamme SR, Beasley HL, et al. Development and assessment of the Wisconsin Surgical Coaching Rubric. JAMA Surg. 2020;155:486–92.


  31. O’Brien HL, Cairns P, Hall M. A practical approach to measuring user engagement with the refined user engagement scale (UES) and the new UES short form. Int J Hum Comput Stud. 2018;112:28–39.


  32. Cohen J. Quantitative methods in psychology: a power primer. Psychol Bull. 1992;112(1):155–9.


  33. American College of Surgeons. About Advanced Trauma Life Support. https://www.facs.org/quality-programs/trauma/atls/about. Accessed 26 Sept 2020.

  34. Sanfey AG, Loewenstein G, McClure SM, et al. Neuroeconomics: cross-currents in research on decision making. Trends Cogn Sci. 2006;10:108–16.


  35. Mohammad A, Branicki F, Abu-Zidan FM. Educational and clinical impact of Advanced Trauma Life Support (ATLS) courses: a systematic review. World J Surg. 2014;38:322–9.


  36. Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. Am Psychol. 2009;64:515–26.


  37. Mohan D, Chang CC, Fischhoff B, et al. Outcomes after a digital behavior change intervention to improve trauma triage: an analysis of Medicare claims. J Surg Res. 2021;268:532–9.


  38. McGlynn EA, Schneider EC, Kerr EA. Reimagining quality measurement. N Engl J Med. 2014;371:2150–1.


  39. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington DC: The National Academies Press; 2015.


  40. Bellg AJ, Resnick B, Minicucci DS, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23:443–51.


  41. Ritchie MJ, Parker LE, Kirchner JE. From novice to expert: methods for transferring implementation facilitation skills to improve healthcare delivery. Implement Sci Commun. 2021;2:39.


  42. Collins L. Optimization of behavioral, biobehavioral, and biomedical interventions: the multiphase optimization strategy (MOST). New York City: Springer Nature; 2018.



Acknowledgements

The authors thank the game development team at Schell Games.

Funding

• DP2 LM012339 (Mohan)

• R21 AG072072 (Mohan)

• K23 NS097629 (Elmer)

Author information

Contributions

Study concept, design, analysis, interpretation: DM, JE, RA, RMF, BF, KR, JB, DW. Drafting of the manuscript: DM. Critical revision of the manuscript for important intellectual content: JE, RA, RMF, BF, KR, JB, DW. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Deepika Mohan.

Ethics declarations

Ethics approval and consent to participate

The University of Pittsburgh Institutional Review Board approved this study (STUDY20120026).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

CONSORT 2010 checklist of information to include when reporting a pilot or feasibility trial*.

Additional file 2.

Logic diagram of the intervention and its anticipated mechanism of action. We show the outcomes that we will not assess during this pilot trial with an asterisk.

Additional file 3.

Coaching Manual for Shift with Friends (version 6).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Mohan, D., Elmer, J., Arnold, R.M. et al. Testing the feasibility, acceptability, and preliminary effect of a novel deliberate practice intervention to reduce diagnostic error in trauma triage: a study protocol for a randomized pilot trial. Pilot Feasibility Stud 8, 253 (2022). https://doi.org/10.1186/s40814-022-01212-y

