Feasibility of a quality improvement project with an interrupted time series design to increase adherence to evidence-based pulmonary embolism diagnosis in the emergency department.

Background: Many evidence-based clinical decision tools are available for the diagnosis of pulmonary embolism (PE). However, these clinical decision tools have had suboptimal uptake in everyday clinical practice in emergency departments (EDs), despite numerous implementation efforts. We aimed to test the feasibility of a multi-faceted intervention to implement an evidence-based PE diagnosis protocol. Methods: We conducted an interrupted time series study in three EDs in Ontario, Canada. We enrolled consecutive adult patients accessing the ED with suspected PE from January 1, 2018 to February 28, 2020. Components of the intervention were: clinical leadership endorsement; a new pathway for PE testing; physician education; personalized confidential physician feedback, and collection of patient outcome information. The intervention was implemented in November 2019. We identified six criteria for defining the feasibility outcome: successful implementation of the intervention in at least two of the three sites; capturing data on ≥ 80% of all CTPAs ordered in the EDs; timely access to electronic data; rapid manual data extraction ≥ 80% of the time; feedback preparation before the end of the month ≥ 80% of the time; and time required for manual data extraction and feedback preparation ≤ 2 days per week in total. Results: The intervention was successfully implemented in two out of three sites. 5,094 and 899 patients were tested for PE in the periods before and after the intervention, respectively. We captured data from 90% of CTPAs ordered in the EDs, and we accessed the required electronic data. The manual data extraction and individual emergency physician audit and feedback were consistently finalized before the end of each month. The time required for manual data extraction and feedback preparation was ≤ 2 days per week (14 hours). Conclusions: We proved the feasibility of implementing an evidence-based PE diagnosis protocol in two EDs.
We were not successful in implementing the protocol in the third ED. Registration: The study was not registered.


Introduction

Background
The diagnosis of pulmonary embolism (PE) is a multi-step process. At least 10 different clinical decision tools are available, 1 mainly aimed at reducing the use of advanced imaging techniques such as computed tomography pulmonary angiography (CTPA) or ventilation-perfusion scan (VQ). Clinical decision tool use is endorsed by recent guidelines. 2 Despite high quality evidence supporting the use of these diagnostic algorithms, published data show that these tools are seldom used in everyday clinical practice. [3][4][5][6] The main effect of this is the overuse of CTPA, which translates into excess radiation exposure, the possibility of contrast-induced nephropathy, overtreatment, and reduction of health system efficiency and resources. This gap persists despite numerous efforts to improve the implementation of evidence-based diagnostic pathways. 6,7 The reasons why implementation studies to date had minimal or no impact on the use of CTPA include the perception that clinical decision support systems are complicated, have a negative impact on productivity, are not supported by a sufficient body of evidence, and are not better than clinical judgement. 6,7

Why the evidence-practice gap?
de Wit and colleagues conducted a nation-wide think-aloud interview study with 63 emergency department (ED) physicians from nine sites (unpublished data). This study found that the sources of variance in decision making arose from: 1) Physicians' risk tolerance for missing a PE diagnosis being very low; 2) Physicians are confident in their gestalt and think their suspicion is sufficient to order a CTPA; 3) The Wells score is perceived as complicated and can lead to either under- or over-estimation of clinical probability, and; 4) Having to order a D-dimer blood test, waiting for the results, and then ordering a CTPA if needed is perceived as delaying an inevitable scan. Using the Theoretical Domains Framework, this same group interviewed ED physicians within 2 weeks of them having deviated from the evidence-based protocol for PE diagnosis (unpublished data). They found the following themes: 1) Reserved confidence in the clinical guidelines and ability to apply them; 2) Belief that strong clinical gestalt is crucial when testing for PE; 3) Fear of not diagnosing PE; 4) Belief that the advantages of the standard protocol outweigh the disadvantages. Lastly, our research group has also performed semi-structured interviews with ED patients who were being tested for PE. 8 Patient interviews revealed: zero tolerance for false positive and false negative diagnoses; association of rapid ED assessment with better quality testing; preference for individualized testing; contradictory acceptance of CT limitations for PE diagnosis; overestimation of pretest probability for PE; association of more tests with better quality testing; preference for imaging over clinical examination to exclude PE; primary concern being testing related to a previous heart-related issue; a focus on pain symptoms rather than the underlying diagnosis; and a preference for direct interaction with the ED physician.
It is time to create a new implementation method for evidence-based PE diagnosis in the ED. An effective strategy would be safe for patients, use resources judiciously, and benefit physicians as well as patients.
Considering the barriers to the successful implementation of evidence-based diagnostic strategies highlighted in previous studies, 6-8 we thought it was crucial to use a clinical decision tool that is simple, safe, and supported by high-quality evidence. Furthermore, we decided to design a multi-faceted intervention, ensuring leadership endorsement and targeting patients and healthcare workers, with a focus on physicians. However, given the challenges and the negative results from previous quality improvement studies for PE diagnosis, we decided to assess the feasibility of our intervention in two centers in Hamilton, Ontario and one in Ottawa, Ontario before attempting to assess its effect or to implement the protocol on a larger scale. This report will focus on the feasibility aspects of the study.
The primary objective of this study was to assess the feasibility of implementing an evidence-based PE diagnosis protocol in three EDs. The secondary objective was to report data on the period preceding the intervention and preliminary data on the first three months post intervention.

Study design.
This is a study on the feasibility of a quality improvement intervention, with a before-after design.
Context/Study setting

Population studied

This implementation study used electronic data to identify patients tested for PE in the ED. The population was consecutive adults (aged 18 years and older) with suspected PE, for whom a D-dimer blood test and/or imaging for PE (CTPA or VQ scan) was performed in the ED. When a patient books into the ED, the triage nurse assigns a chief complaint from a selection of predefined categories, classified based on the Canadian Emergency Department Information Systems (CEDIS) Presenting Complaint List. 10 The D-dimer blood test is used to diagnose both deep vein thrombosis and PE, so to ensure we captured only patients tested for PE (and not deep vein thrombosis), we aimed to restrict our population to those who presented to the ED with the presenting complaint "chest pain" (cardiac and non-cardiac) and/or "shortness of breath." To evaluate whether a sufficient proportion of all patients tested for PE were registered under these two presenting complaints, we retrieved a list of CTPAs ordered in the ED in 2018.
We manually extracted the presenting complaints for each case. We aimed to capture a minimum of 80% of the CTPAs ordered in the ED.

Implementation.
We led a Canadian Association of Emergency Physicians (CAEP) working group consisting of six emergency physicians from across Canada with expertise in PE diagnosis and knowledge translation. This knowledge broker group systematically reviewed the literature and identified all optimal PE diagnostic strategies for the ED, as well as optimal ways to encourage adherence to this diagnostic strategy. As a result, we decided to test a multimodal intervention aimed at promoting the uptake of D-dimer in everyday clinical practice. The implementation strategy is based on the knowledge translation recommendations from CAEP. 11 The components of the intervention are detailed in Table 1. This implementation strategy was discussed at each site by engaging with local champions, hospital managers, nurses, diagnostic imaging staff, support staff, and physicians to identify and implement strategies to overcome local barriers.

Table 1: Description of the components of the intervention.

Leadership endorsement
We obtained approval from the clinical and managerial leads for the ED, radiology, hematology and thrombosis for a new protocol for the diagnosis of PE.
Ordering D-dimer and CTPA/VQ scan We moved from the concept of ordering D-dimer or imaging for PE, to the broader concept of "testing for PE". We created a new order set (Appendix A) which guides ED testing for PE.
The new diagnostic PE pathway starts with D-dimer blood testing in all patients.
We no longer asked the physician to calculate the Wells score, to simplify the process and to avoid having physicians artificially increase the score in order to avoid using D-dimer.
The testing process has been semi-automated. If the D-dimer result is lower than the threshold, the attending physician is notified by the nurse and PE is excluded. If the D-dimer result is higher than the threshold, the patient goes directly for a CTPA without the need for physician reassessment. The physician is notified when the imaging report is available.
We made the new PE diagnostic pathway attractive to use by enabling ordering of CTPA without the requirement to first discuss with a radiologist.
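The semi-automated pathway above can be sketched as a simple decision rule. This is a minimal illustration only: the threshold value and function name are assumptions for the sketch, not the actual hospital system logic or local laboratory cut-off.

```python
# Minimal sketch of the semi-automated PE testing pathway described above.
# The threshold value (500 ng/mL FEU) is an illustrative assumption, not the
# site-specific laboratory cut-off used in the study.

D_DIMER_THRESHOLD = 500  # illustrative value only

def pe_pathway_next_step(d_dimer_result: float) -> str:
    """Return the next step in the PE testing pathway for a D-dimer result."""
    if d_dimer_result < D_DIMER_THRESHOLD:
        # The nurse notifies the attending physician; PE is excluded.
        return "notify physician: PE excluded"
    # The patient proceeds directly to CTPA without physician reassessment;
    # the physician is notified when the imaging report is available.
    return "proceed to CTPA"

print(pe_pathway_next_step(300))
print(pe_pathway_next_step(900))
```

The point of the semi-automation is that no physician decision is needed between the D-dimer result and the scan: the branch is taken by protocol, not by gestalt.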

Physician education
We met with the ED physicians and nurses and provided educational material to support the use of the proposed diagnostic workflow.
Personalized confidential physician feedback

We sent each physician a quarterly confidential personalized report containing the following:

1. The proportion of eligible patients (based on the presenting complaint) who had an imaging test, expressed as a percentage: (number of exams requested)*100/(total number of eligible patients).
2. The proportion of imaging tests ordered without D-dimer or despite a negative D-dimer, expressed as a percentage: (number of cases in which the algorithm was not followed in patients receiving imaging)*100/(total number of imaging tests performed).

These metrics were calculated for the individual physician and compared to the average of all the physicians working in the same ED.
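The two feedback metrics are straightforward percentages; the following sketch makes the arithmetic explicit. The counts are made up for illustration and are not study data.

```python
# Illustrative calculation of the two physician feedback metrics defined above.
# All counts below are hypothetical, not study data.

def pct_imaged(n_imaging_requested: int, n_eligible: int) -> float:
    """Proportion of eligible patients who had an imaging test, as a percentage."""
    return n_imaging_requested * 100 / n_eligible

def pct_non_adherent(n_protocol_deviations: int, n_imaging_performed: int) -> float:
    """Proportion of imaging tests ordered without D-dimer or despite a
    negative D-dimer, as a percentage."""
    return n_protocol_deviations * 100 / n_imaging_performed

# Example: a physician saw 120 eligible patients, ordered 30 imaging tests,
# 6 of which deviated from the algorithm.
print(pct_imaged(30, 120))      # 25.0
print(pct_non_adherent(6, 30))  # 20.0
```

In the actual reports, each physician's two percentages were shown next to the average for all physicians at the same ED, so the comparison carried the message rather than the raw numbers.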
The form was piloted with some of the study clinical investigators (the research manager and two ED physicians with expertise in quality improvement and knowledge translation) and then with a convenience sample of four physicians. The form was modified according to their feedback.

Patient information
We developed patient information about the testing process, as well as the risks and benefits of undergoing CT scanning. Moreover, the PE testing order set incorporated nurse-facilitated identification of patient-specific goals (for example, treatment of pain) so the treating ED physician can focus their treatment and advice on patient-specific needs.

ED: emergency department; PE: pulmonary embolism; CTPA: computed tomography pulmonary angiography.

Comparison and timelines
A flow chart describing the timeline is reported in Fig. 1. Data on baseline clinical practice were collected from January 1, 2018 to October 31, 2019. The intervention was implemented in November 2019. Data on the post-intervention period were collected starting in December 2019. For the purpose of this report, we present the post-implementation data until the end of February 2020.

Outcomes
Primary outcome - feasibility: Feasibility was described using six criteria:

1. Successful implementation of the intervention in at least two of the three sites.
2. Capturing data on ≥ 80% of all CTPAs ordered in the EDs.
3. Timely access to all the required electronic data.
4. Timely manual chart data extraction. To be considered feasible, the data extraction had to be completed five days before the end of the following month ≥ 80% of the time.
5. Implementation of individual emergency physician audit and feedback. To be considered feasible, we required that feedback data on the previous month was complete before the end of the following month ≥ 80% of the time.
6. An estimate of the number of hours of research assistant time to extract the required data and synthesize the physician feedback reports (total number of hours per week). To be considered feasible if ≤ 2 days per week.
Secondary outcomes - preliminary estimates of effect: The outcomes used for the preliminary estimate of the effect of the intervention included the following:

e. Proportion of imaging tests ordered without D-dimer or despite a negative D-dimer: (number of cases in which the algorithm was not followed in patients receiving imaging)/(total number of imaging tests performed).
f. Proportion of imaging tests not ordered despite D-dimer positivity: (number of cases in which imaging was indicated and not performed)/(total number of cases in which imaging was indicated).
g. We also described the prevalence of PE, as follows: all PEs, central PE (segmental or more), and distal PE (sub-segmental).
Balancing measure: A before-after comparison of the number of D-dimer blood tests ordered in the ED.

Analysis
Baseline patient characteristics and the feasibility measures were reported using standard descriptors of central tendency and variability (mean and standard deviation, or median and ranges, as appropriate). The secondary outcomes regarding effect and the balancing measure were reported descriptively, with 95% confidence intervals (CIs) for the differences in proportions. To facilitate visual inspection, the outcomes were also plotted against time with two regression lines, before and after the intervention. All the analyses were conducted with STATA/IC v. 16 (StataCorp LP, College Station, TX, USA).
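The 95% CI for a difference in proportions can be sketched as below. This is an illustrative Wald-interval computation with made-up counts; the study's actual analyses were run in STATA, which may use a different interval method.

```python
# Sketch of a 95% confidence interval for a difference in proportions
# (Wald interval). Counts are hypothetical; the study used STATA for the
# actual analysis, which may apply a different interval method.
from math import sqrt

def diff_proportion_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """Return (difference, lower, upper) for p1 - p2 with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    # Standard error of the difference of two independent proportions.
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

d, lo, hi = diff_proportion_ci(40, 100, 25, 100)
print(f"difference {d:.3f}, 95% CI ({lo:.3f}; {hi:.3f})")
```

Results in the paper, such as the 8.1% (95% CI 5.0; 11.2) increase in adherence, are reported in exactly this difference-with-interval form.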

Ethics
Research ethics approval was obtained from participating sites prior to commencing the study (Hamilton Integrated Research Ethics Board # 5339-C).

Results
Primary outcome: feasibility of the study

A summary of the results for the feasibility of the study is reported in Table 2.

- Access to all the required electronic data (criterion: successful access to all the data): met.
- Timely manual data extraction (criterion: completed five days before the end of the following month ≥ 80% of the time): met, 100% of the time.
- Implementation of individual emergency physician audit and feedback (criterion: feedback on the previous month ready to be emailed before the end of the following month ≥ 80% of the time): met, 100% of the time.
- Number of hours of research assistant time to extract the required data and synthesize the physician feedback reports (criterion: ≤ 2 days per week): met, approximately 14 hours per week.

Implementation at participating hospitals
The intervention was implemented at the Hamilton sites, but not at the Montfort hospital in Ottawa. The PE diagnostic order set was initiated on October 28, 2019. The topic of PE diagnosis was discussed at the HHS ED physician rounds and three educational podcasts were recorded. [12][13][14] All emergency physicians received an email explaining the rationale for the intervention and its objective, accompanied by a Frequently Asked Questions section and educational material (email text available in Appendix B). As a reminder to the ED physician group, we attached laminated stickers with a logo and the invitation to "rule out PE without CT" to each computer in their offices (Fig. 2). A team of three nurse educators engaged in individual meetings with the ED nurses, explaining the aim of the intervention and the new workflow. A multidisciplinary team including managers, radiologists, emergency physicians, educators, radiation technologists, and nurses met regularly to review progress.
We held regular project meetings with the Montfort Hospital ED. In May 2019, our colleagues let us know they were not able to participate in the study due to lack of data access and resources.

Identification of the population
We found that the two selected CEDIS presenting complaints (chest pain and shortness of breath) only captured 70% of all CTPAs ordered by emergency physicians. Therefore, the inclusion criteria were expanded to 13 ED presenting complaints (Table 3), allowing us to capture 90% of the ED-ordered CTPAs. The remaining 10% were dispersed among 40 presenting complaints, each one accounting for 0.1-0.9% of all CTPAs (Table S1).

Obtaining electronic data
We requested data from three sources: the hospital decision support services, an internal hospital research database and the eHealth Information Technology Services (eHITS) office. The first two sources were unable to provide timely data (at the end of each month). The eHITS department was able to provide us with the required data in the required turnaround time. After working together to define the database queries and to validate the data, we received the first finalized dataset in January 2019. The system is now automated with monthly updates.

Manual data extraction
We found that the "ordering doctor" variable for CTPAs in the electronic medical record (EMR) was not accurate: some scans ordered by an admitting service were logged under the emergency physician's name. It was crucial for us to have accurate data on the ordering physician, otherwise the physician audit and feedback would lose credibility. Therefore, we manually checked the ordering physician data. Despite this increase in the workload, the manual extraction was always completed before the end of the following month.

Implementation of individual physician feedback
The path toward the implementation of physician feedback proved to be challenging and many adaptations of the original plan were required. Initially, we aimed to provide individual feedback to physicians on a monthly basis. On review, we realized that the number of CTPAs ordered per month per physician was too low (range 0-6 CTPAs per month per physician). The proportion of inappropriate CTPAs would have been subject to enormous variability for very small variations in the actual number of non-appropriate CTPAs ordered. We decided to reduce the frequency of feedback from monthly to quarterly, and the feedback was issued at the end of the first three months post implementation. An example of the physician feedback is reported in Fig. 3.
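The small-denominator problem described above is easy to demonstrate with hypothetical counts: with only a handful of CTPAs per month, a single extra non-adherent order swings the physician's proportion by tens of percentage points, which is why quarterly aggregation was preferred.

```python
# Illustration (with made-up counts) of why monthly per-physician feedback
# was too unstable: with very few CTPAs per month, one additional
# non-adherent order shifts the proportion dramatically.

def pct(n_non_adherent: int, n_total: int) -> float:
    """Non-adherence rate as a percentage."""
    return n_non_adherent * 100 / n_total

# A physician ordering 3 CTPAs in one month:
print(pct(1, 3))   # one deviation -> about a third of all orders
print(pct(2, 3))   # one more deviation doubles the rate

# The same physician over a quarter (e.g. 12 CTPAs): the rate is the same,
# but each single order now moves it by only ~8 percentage points.
print(pct(4, 12))
```

Aggregating over a quarter does not change the underlying rate, only the resolution at which one order can move it, which makes the feedback less noisy and more credible.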

Estimate of research assistant time
Based on the first three months, we calculated that a research assistant is required for twelve hours per week on the project. In addition, a physician should spend two hours per month resolving queries. Preparing and checking individual feedback requires an additional six hours per month, on average. Therefore, the total amount of time required to complete these tasks was approximately 14 hours per week (less than two days), thereby meeting the feasibility criterion.
Secondary outcomes: preliminary estimates of effect

The patients' characteristics and outcome distribution are reported in Table 4. In total, 81,103 patients accessed the HHS EDs for one of the selected presenting complaints between January 1, 2018 and February 28, 2020 (70,932 in the before-intervention period and 10,171 after the intervention). Of these, 5,993 patients were tested for PE and 2,267 patients underwent CTPA or VQ scanning. 285 patients (0.4% of the study population) were diagnosed with acute PE.

*Unless otherwise specified. CI: confidence interval; Q1: first quartile; Q3: third quartile; PE: pulmonary embolism.

7.2% and 8.8% of the study population were tested for PE before and after the intervention, respectively. There was an 8.1% (95% CI 5.0; 11.2) increase in adherence to the proposed protocol. The imaging positive yield showed a trend toward reduction (-2.6%, 95% CI -13.1; 8.0). The time trends for PE testing are reported in Fig. 4 and the time trends for the remaining secondary outcomes are reported in the appendix (Figures S1-S6).

Secondary outcomes: balancing measures
In our study population, a D-dimer was ordered in 6.6% and 8.5% of the patients before and after the intervention, respectively.

Discussion
In our study, we proved the feasibility of implementing an evidence-based PE diagnosis protocol in two EDs in Hamilton, ON. The implementation was not successful at a hospital in Ottawa, ON. We were able to obtain timely electronic data which identified 90% of the CTPAs ordered in the Hamilton EDs. We found that this implementation protocol takes approximately 14 hours per week of research assistant and investigator time. While we successfully implemented the intervention in two out of three centres, we faced numerous barriers. We were expecting to meet some resistance to change due to bureaucracy and human nature, but this translated into delays greater than expected: we aimed to implement the intervention in May 2019, but we actually implemented it in November 2019. We also encountered measurement barriers: it took some time to identify a timely source of electronic data, and additional manual chart extraction was required.
The secondary outcome preliminary results showed a signal of improved adherence to the proposed intervention, with fewer imaging tests ordered without D-dimer or despite a negative D-dimer, and more imaging tests ordered after a positive D-dimer. However, for now, this did not translate into a reduction in the use of imaging tests, nor into an increase in the positive yield of imaging. The results suggest an increase in the use of D-dimer and imaging tests, and the prevalence of PE in the population remained the same. It is still too soon to claim that the intervention is futile, but these preliminary results should not be ignored. For this intervention to be meaningful, and before scaling it up to a multicenter study, we will need to carefully assess its effect in our population. Historically, most implementation strategies have failed to reduce the number of imaging tests and improve the diagnostic yield of imaging. 7 By embedding a clinical decision support tool for ordering CTPAs in the computerized order entry system, Prevedello et al. showed a small reduction in the use of CTPAs (from 26.5 to 24.3 per 1,000 patient visits, p < 0.2) and an increase in the yield of CTPAs (from 9.2% to 12.6%, p < .01). 15 Later, the same research group failed to further improve these outcomes by implementing a performance feedback report for ED physicians.
One explanation for our finding may be that we chose not to implement an adjusted D-dimer strategy (such as the YEARS algorithm, 16 clinical probability 17 or age-adjusted 18 D-dimer). We chose this plan for simplicity. It may be that implementing an adjusted D-dimer strategy would have been more effective, and we are considering changing the intervention in this direction.

Strengths and limitations
Strengths of our study are: 1) the careful review of the existing evidence on the topic and the mixed methods research program that preceded and informed the design of the intervention; 2) the involvement of a multidisciplinary team both for designing and endorsing the intervention; 3) the multi-faceted nature of the intervention aimed at tackling the problem from several angles; 4) the piloting and consequent adaptation of the intervention, and; 5) the assessment of its feasibility.
The main limitation of the study is the before-after nature of the comparison. The results might be biased by confounders that changed over time. The generalizability of the study results, both in terms of feasibility and evaluation of the effect of the intervention, may be reduced because the intervention was implemented only in Hamilton EDs. A multicenter stepped-wedge trial would have mitigated these limitations.
For example, the ongoing COVID-19 pandemic appears to be changing the population attending EDs. We expect to see a reduction in visits for complaints that are not related to COVID-19. We also expect the ED personnel to work under a higher level of stress, which might jeopardize adherence to our protocol. Another limitation is that we were not able to present data on the safety of the intervention. To address this limitation, we are collecting data on the ED visits of any patient with PE in the 30 days preceding the diagnosis. This will increase the data extraction workload but will allow us to find out if these patients could have been diagnosed earlier and were not, and whether this is due to lack of adherence to the protocol or to insufficient safety of the protocol itself.

Conclusions
We proved the feasibility of implementing an evidence-based PE diagnosis protocol in two Hamilton EDs, but we were unable to implement the protocol in an Ottawa ED. The lessons we learned could be useful to researchers willing to implement similar interventions in the future.
Lessons learned

1. One of the three participating centres could not implement the intervention, mainly for lack of dedicated financial and human resources.
2. Substantial efforts were required to access the electronic data, and some of the data had to be extracted manually.
3. Piloting the intervention allowed us to understand what needed to be changed to achieve feasibility.
4. The main changes implemented after the feasibility assessment were: adapting the inclusion criteria; reducing the frequency of the feedback, and; making the feedback content lighter and simpler to understand.

Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Competing interests
All authors declare that they have no competing interests.

Supplementary Files
This is a list of supplementary files associated with this preprint: CONSORTextensionforPilotandFeasibilityTrialsChecklist.doc; Appendix.pdf