Enhancing national audit through addressing the quality improvement capabilities of feedback recipients: a multi-phase intervention development study

Abstract

Background

National audits are a common, but variably effective, intervention to improve services. This study aimed to design an intervention to increase the effectiveness of national audit.

Methods

We used interviews, documentary analysis, observations, co-design and stakeholder engagement methods. The intervention was described in an intervention manual and illustrated using a logic model. Phase 1 described the current hospital response to a national audit. Phase 2 identified potential enhancements. Phase 3 developed a strategy to implement the enhancements. Phase 4 explored the feasibility of the intervention alongside the National Audit of Dementia and refined the intervention. Phase 5 adapted the intervention to a second national audit (National Diabetes Audit). Phase 6 explored the feasibility and fidelity of the intervention alongside the National Diabetes Audit and used the findings to further refine the intervention.

Results

The developed intervention is a quality improvement collaborative (QIC), comprising a virtual educational workshop, virtual outreach for local team leads and virtual facilitation of a learning collaborative delivered after feedback has been received. The QIC aims to support national audit recipients to undertake improvement actions tailored to their local context. The target audience is clinical and clinical governance leaders. We found that actions from national audit were constrained by what the clinical lead perceived they could deliver personally; these actions were not aligned to identified influences upon performance. We found that the hospital response could be enhanced by targeting low baseline performance, identifying and addressing influences upon performance, developing trust and credibility, addressing recipient priorities, presenting meaningful comparisons, developing a conceptual model, involving stakeholders and considering the opportunity cost. Phase 3 found that an educational workshop and outreach strategy could support implementation of the enhancements through developing coherence and cognitive participation. We found feasibility could be increased by revising the content, re-naming the intervention, amending activities to address the time commitment, incorporating a more structured analysis of influences, supporting collaboration and developing local feedback mechanisms. Phase 5 found that adaptation to a second national audit involved reflecting differences in the clinical topic, context and contractual requirements. We found that the behaviour change techniques identified in the manual were delivered by facilitators. Participants reported positive attitudes towards the intervention and that the intervention was appropriate.

Conclusions

The QIC supports local teams to tailor their actions to local context and develop change commitment. Future work will evaluate the effectiveness of the intervention as an adjunct to the National Diabetes Audit.

Key messages regarding feasibility

  • In order to develop a feasible intervention, we iteratively applied theory, evidence and stakeholder involvement across multiple phases

  • We found opportunities to enhance feedback recipients’ quality improvement capabilities through improved informational analysis, the tailored selection of improvement actions and the development of organisational commitment.

  • The developed intervention implements theory-, evidence- and stakeholder-informed target behaviours through virtual workshops, virtual outreach and virtual facilitated collaborative meetings.

Background

National audits are a common form of audit and feedback in which participants receive information about their clinical performance over a specific time period. Audit and feedback leads to modest improvement [1]. There is evidence and theory (e.g. [1, 2]) describing ways to increase the effectiveness of audit and feedback that have the potential to enhance national audits.

Audit and feedback is a complex intervention that seeks to change the behaviour of feedback recipients [1]. There is a lack of evidence about the best method for developing complex interventions [3]; however, there are common elements to existing best practice principles (e.g. [3,4,5,6,7,8]). These elements include clarity of perspective; the use of evidence, theory and stakeholder involvement; and a defined approach to implementation. The use of iterative methods to develop complex interventions is recommended; for example, O’Cathain et al. [3] highlighted that co-design can provide a method through which repeated cycles of assessment, review and refinement involving stakeholders are undertaken. It is important both to specify the content of interventions in order to inform delivery, evaluation, refinement and replication [9] and to describe the intervention development process [10]. The template for intervention description and replication (TIDieR) provides a framework to describe the content of complex interventions [11]. Behaviour change techniques are observable and replicable active components that can be used to describe the content of behaviour change interventions [12].

This paper describes the multi-phase development of an intervention to enhance the effectiveness of national audit through the iterative integration of evidence, theory and stakeholder input. There are approximately 60 national audits in England [13]. We sought to develop enhancements that might be transferable between national audits. During development, the intervention was tested as an adjunct to both the National Audit of Dementia [14] and the National Diabetes Audit [15]. The National Audit of Dementia is undertaken approximately every 2 years by a multi-agency group led by the Royal College of Psychiatrists. The dementia audit is voluntary, although hospitals are required to report on whether they have taken part, and involves manual data collection. The dementia audit describes the care provided to approximately 10,000 patients, as well as staff and carer experience and organisational information (e.g. training, policies). The National Diabetes Audit (NDA) provides feedback describing clinical performance by primary and secondary care teams, including specialist foot care, pregnancy and transition teams. It describes the care provided to approximately 3.6 million patients. The diabetes audit contains both mandatory and voluntary components and both automatic and manual data collection. NDA feedback is given quarterly, annually or two-yearly, depending upon the specific element of diabetes care being audited. Both the dementia and diabetes audits are commissioned nationally on behalf of NHS England and seek to improve care. For both audits, feedback is provided as an Excel spreadsheet and a national-level report; the dementia audit also provides a hospital-level report. Almost all hospitals in England take part in each audit. Intervention development was iterative, with the results of earlier phases informing subsequent phases. To reflect the development method, this paper describes the methods and results for each phase sequentially.

Method and results

Overview

The work was undertaken in six phases: in phase 1, we aimed to develop a rich description of one audit (the National Audit of Dementia) in different hospitals. This description was undertaken to inform the development and testing of enhancements to that audit (phases 2–4). Transferability was considered through later work to adapt the enhancements to a second national audit (the National Diabetes Audit; phase 5) and further exploration of feasibility (phase 6).

Figure 1 represents the study design and illustrates the integration of evidence, theory and stakeholder input across the study’s six phases. Stakeholder analysis [16] identified categories of stakeholders including patients, carers, clinical staff, policy-makers, clinical auditors, regulators, professional bodies, audit provider organisations and researchers. However, to specify which particular stakeholders to involve and to decide and justify a method of involvement, it was necessary to be clear about the reason for involvement [3]. Stakeholder involvement sought to improve feasibility and acceptability through discussion that drew upon the diverse perspectives of people anticipated to be involved in potential enhancements. To achieve this goal, stakeholder recruitment involved identifying diverse organisations (organisations that differed in regulator rating and size) and inviting the involvement of clinical leads and clinical audit leads. Carers were sought through two charities (Alzheimer’s Society and Young at heart) and two organisations that support patient and public involvement in research (Voice North and the Dementias and Neurodegenerative Diseases Research Network). Further stakeholders were identified based upon their role for: the regulator (n = 1), relevant professional bodies (n = 2), audit provider organisation (n = 1), audit commissioner (n = 1) and behaviour change researchers (n = 3). During phases 1–4, stakeholder involvement was through both a co-design group (involving three carers, three hospital clinical leads and three hospital clinical governance leads) and through an advisory group (n = 9; a patient, and representatives from the regulator, relevant professional organisation, national audit provider organisation, national audit commissioner and behaviour change researchers), with the research team providing a conduit between the two groups.

Fig. 1

An overview of the study design indicating key inputs to intervention development [17]

The structure for stakeholder involvement recognised the potential impact of power upon willingness to contribute diverse perspectives (e.g. clinicians in the co-design group; national audit provider, regulator and professional body representatives in a separate advisory group); co-design group members were supported to provide diverse perspectives through pre-workshop discussions and a within-workshop ice-breaker exercise that described and celebrated differences in perspective; facilitation sought to support stakeholders to articulate and explore differences in perspective [18]. The co-design group met face-to-face 14 times (28 h). The advisory group provided input through one whole group meeting and later sub-group meetings. Stakeholder involvement in phases 5 and 6 was through iterative discussions with a group of experts by experience (n = 12) and governance groups associated with the National Diabetes Audit.

The dementia audit was selected as it was a national priority [19]. During the study of the dementia audit, Sykes became quality improvement lead for the National Diabetes Audit (NDA). Feedback on this quality improvement work provided the opportunity to adapt the intervention developed alongside the National Audit of Dementia to diabetes care.

Ethical approval for this study was gained both from the Newcastle University Faculty of Medical Sciences and the National University of Ireland (Galway) Ethics Committees.

Phase 1 method

The aim of phase 1 was to describe what happens when a national audit reaches the hospital. We studied six hospitals in four diverse English National Health Service organisations over a 16-month period. We undertook documentary analysis (n = 39), semi-structured interviews (n = 32) and 44 h of observations of healthcare workers involved in the response to feedback from the National Audit of Dementia. Data were analysed using framework analysis and findings were presented iteratively to a co-design group (8 workshops; 16 h) who used them to develop a description of the hospital response to the national audit [18].

Phase 1 results

We found hospital staff invested considerable time collecting data and that people collecting data interpreted current practice and the audit standard differently when assessing compliance. There were delays between data collection and receiving feedback, and when feedback arrived at the hospital it was reviewed by approximately three people, typically two clinical leads and a positional leader with clinical governance responsibility. The clinical leads reviewed the report, often focussing on the national recommendations, and developed a local action plan. There was little evidence that this action plan was informed by local performance or led to the selection of actions aligned to an analysis of influences. The action plan was reviewed at quality assurance committees, where committee members collectively determined a plan to improve performance. The results from phase 1 are reported in detail elsewhere [18].

Phase 2 method

The aim for phase 2 was to identify and specify enhancements to the national audit. These aims were met through three co-design group workshops (6 h), where inputs to the discussion included the primary data from Phase 1, a reminder of the co-design group members’ stated pre-study views, and presentation of the findings from a systematic review of audit and feedback [1] and theory-informed hypotheses [20]. The facilitator made notes about the discussion on flipcharts during the workshops and in a reflective diary after the workshop. These notes were transcribed after each workshop.

In workshop 9, the co-design group selected a potential target for enhancement using nominal group technique [21]. These potential targets were narrowed by considering feasibility in consultation with the advisory group and research team. In workshop 10, the co-design group discussed the feedback from the advisory group, and further defined the outcome through prompts such as, “what would better action planning lead to?” and “how would you assess whether an action plan was a good one?”.

The research team identified evidence- and theory-informed [1, 20] proposals that might influence the identified outcome (e.g. present loss-framed data; address trust and credibility to increase audit effectiveness) and considered their theoretical coherence. In workshop 11, we presented the proposals to the co-design group and asked whether they agreed with each proposal and whether they thought that it might lead to the identified outcome. Group members were then placed into three groups, with each group including a carer, clinical lead and clinical governance lead. The co-design sub-groups completed a task to sort the proposals by categories in the TIDieR framework [11], using post-it notes to cluster the proposals according to, for example, who could do them and where and when they could be done. Each subgroup presented back to the whole group as part of a discussion that sought to explore differences.

After workshop 11, the notes were transcribed, and the data entered into a table capturing adapted elements of the TIDieR framework (we differentiated between when and how much and combined planned and actual fidelity). The information in the table was then used to describe narratively a series of specified ‘steps’ (Table 1) that captured the aim of the step and who was to do what, when and how. These steps were later amended after phase 3, so as to reflect feedback on influences upon implementation, and amended further following feasibility testing in phases 5 and 6.

Table 1 A description of each of the seven specified steps

Phase 2 results

The co-design group prioritised data collection, feedback and action planning. Consultation with the advisory group and further discussion with the co-design group led to the prioritisation of action planning due to contractual constraints on amending either data collection or feedback delivery.

Facilitated discussion with the co-design group led to them specifying the initial outcome sought from enhancing action planning: an action plan that targets poor performance, describes why performance is poor, and contains actions which are relevant, actionable, specific, time-bound and measurable. The group further defined each of these terms (Table 7 in Appendix 1).

The research team identified that the following evidence- and theory-informed intervention target behaviours aligned to this outcome: focus on practices with low baseline performance [1]; address recipient priorities; develop trust and credibility in the results; present meaningful comparisons; use cognitive influences by presenting loss-framed data; identify and address barriers to improved performance; develop a conceptual model of the link between the action and improved care; involve people with control of performance; and consider the opportunity cost of the improvement action [20]. The co-design group considered these proposals, agreeing with each except for the use of loss-framing, which they said would not be acceptable. The group said that this lack of acceptability would make it difficult to implement and hinder the implementation of the other enhancements. The research team reviewed the theoretical coherence of the proposed target behaviours, identifying that the development of commitment and informational appraisal to select actions resonated with the theory of organisational readiness for change [23]. The co-design group sorted the intervention targets according to the adapted TIDieR framework criteria. This exercise determined that the target audience was clinical and clinical governance leads. The research team used the notes from the workshop to develop the ‘steps’ described in Table 1; some steps were to be taken simultaneously.

Phase 3 method

The aim for phase 3 was to develop a strategy to implement the seven steps. The normalisation process theory (NPT) toolkit [24] was used as a heuristic device to explore co-design group members’ reported beliefs about influences upon the implementation of the steps. During workshop 12, each specified step was reviewed individually by co-design group members, and then discussed by the group to surface potential influences upon implementation.

After workshop 12, the research team used the stakeholders’ responses to select and specify the implementation strategy by identifying mechanisms (e.g. coherence) and ingredients (e.g. communal specification) which might affect implementation of each step. We selected a potential type of strategy (educational workshop) based on potential to address the identified ingredients. To develop the content and delivery of the strategy, we drew upon the notes from Workshop 12 and a review describing factors associated with increased effectiveness of the selected strategy (educational workshop [22]). We coded the behaviour change techniques in the draft materials, reviewed consistency, discussed disagreements to seek agreement, and described the intervention in an intervention manual. To review the coherence of the intervention we described it in a logic model which aims to describe the alignment from BCTs, NPT mechanisms, target behavioural outcomes and determinants to patient outcomes. In workshop 13, the research team presented the content and delivery of the intervention to the co-design group. The co-design group suggested amendments. Strategy development in workshop 13 led to the inclusion of an additional strategy (educational outreach), and the consideration of further evidence [25]. We amended the manual and logic model based upon their feedback.

Phase 3 results

Influences upon the implementation of each step were identified. Looking across steps, we identified that the key NPT mechanisms were coherence and cognitive participation and proposed that these may be addressed through an educational workshop and educational outreach (Table 2).

Table 2 Influences upon the implementation of each specified step

The research team also drew upon existing evidence that educational workshops with high attendance, a mixture of interactive and didactic content, and content that makes the target behaviours less complex may be more effective [21]. MS drafted the educational workshop by developing content that addressed the identified ingredients for each step. The draft materials were presented to the research team, who proposed amendments to the manual; for example, to avoid social comparison that might undermine the implementation of the target behaviours by removing the description that the target behaviours were not undertaken at the phase 1 study sites. The research team agreed on the coherence of the association between NPT mechanism and BCT ingredient. The amended materials were presented to the co-design group to consider face validity. The co-design group proposed further amendments; for example, to amend the workshop booklet to prompt participants to capture tasks and to group the list of potential improvement actions [22] so as to reduce participant burden. The intervention manual was amended in response to this feedback. The group agreed with the proposal to call the intervention ‘logical improvement planning’.

Phase 4 method

The aim for phase 4 was to refine the intervention based upon an exploration of fidelity, feasibility, acceptability and appropriateness [9, 26]. We delivered the educational workshop to the target audience (clinical leads and clinical governance leads) at two hospitals within one NHS organisation. Semi-structured interviews explored fidelity of enactment, the acceptability and appropriateness of the enhancements and the acceptability and feasibility of the implementation strategy. The data were analysed using thematic analysis [27]. The analysed findings were presented to the co-design group, the research team and members of the advisory group. The co-design group were asked both to describe their views on whether and how the intervention should be amended and later to comment on proposed changes identified by the research team and members of the advisory group. The intervention was amended based upon their feedback.

Phase 4 results

We delivered the intervention to four healthcare workers (three clinical leads and one clinical governance lead) in September 2019. We took notes (Appendix 2) and interviewed two clinical leads in March 2020. We sought to interview the other attendees and other potential participants involved in the organisational response to the national audit. One potential participant was willing to discuss the work but did not consent for use of the data. This discussion was used to sense-check findings from the earlier interviewees. Further interviews were prevented by the hospital’s pandemic response.

In exploring fidelity of receipt, participants were able to describe intervention content relating to the identification of opportunities for improvement; the need for work to explore influences; linking performance to local priorities; using comparators; and reflecting existing workstreams. There was evidence for fidelity of enactment and acceptability (Table 3).

Table 3 Evidence for fidelity of enactment and acceptability

Interview participants described potential enhancements to the intervention: creating opportunities to learn from others, adding a monitoring mechanism to the content, and providing those leading the work with feedback about the impact of their improvement strategies:

“What works in other places? And how do we get there? So we have our data already now. We look at where it works well and what have they done to address these things. And then we start implementing those in ours. So that’s how standard approach works…you would need another audit cycle here at PDSA (plan-do-study-act) to prove which of those interventions have actually worked” (Interviewee 2)

In addition to the interview data, notes made during intervention delivery were presented to the co-design group in workshop 14. The group identified that the participants’ preconceptions about the intervention (e.g. “I went into the meeting with some scepticism”) may have adversely affected participants’ initial buy-in. They proposed changing the name of the intervention. They agreed that new components to gain feedback about the effectiveness of the intervention and support sharing between teams may further enhance the intervention. We changed the name to ‘Quality Improvement Collaborative’ to address participants’ reported prior misunderstanding about the aim of the intervention and to reflect the use of shared learning. In response to findings that participants moved quickly to selecting solutions, it was agreed that additional content should take them through ‘within-workshop’ activity to undertake a more structured analysis of influences upon behaviour and an exercise to identify stakeholders. The group felt that bringing more of the work into the workshop, rather than training within the workshop for later independent completion, may also help reduce the reported time burden.

The logic model and materials were amended to reflect the changes agreed with the co-design group. In response to the finding that participants drew upon a narrow range of influences, we introduced an exercise to select influences using the theoretical domains framework [28]. In response to the finding that participants jumped to stakeholders associated with solutions, we introduced a new exercise to identify stakeholders, consider their influence and interest [16] and identify strategies to increase their interest and influence through exploration of their priorities and use of comparators. New content was developed to support participants to develop feedback mechanisms and monitor changes over time. We sought to increase collaboration between teams by asking teams to describe their plans, facilitating group discussion and introducing monthly virtual facilitated meetings to describe experience and support both group problem solving and resource sharing (e.g. previously produced patient leaflets, guidelines or business cases). The addition of peer descriptions of strategies, together with the apparent burden of the ‘expert recommendations for implementing change’ (ERIC) [22] exercise, led to the removal of the exercise to review 64 potential strategies.

Phase 5 method

The aim for phase 5 was to adapt the intervention to a different national audit. The National Diabetes Audit provides feedback describing the care provided to approximately 3.6 million people by more than 6000 general practices and over 100 specialist diabetes teams in England and Wales [15]. From 2017 to 2019, the National Diabetes Audit (NDA) sought to increase improvement through delivery of four quality improvement collaboratives. These collaboratives, led by MS, used an adapted Breakthrough Series method [29] by supporting clinical teams to engage stakeholders, set aims, select priorities, identify and align actions and monitor impact. Face-to-face workshops and teleconferences sought to implement the Breakthrough Series practices. For each of the four 2017–2019 QICs, there was an end-of-collaborative workshop for teams to present their work, describe what they had learned and give both written and verbal feedback on the approach. Approximately 160 participants from 70 teams joined these face-to-face meetings, with each team providing feedback on their work. The NDA QIC delivery team collated and discussed participant feedback [30]. Recognition of the need to amend the NDA QIC created the opportunity to adapt the dementia QIC to support diabetes audit feedback recipients.

Adaptation of the intervention from dementia to diabetes involved revising the dementia QIC intervention manual to account for:

  • Differences in the clinical topic

  • NDA contractual requirements

  • External context

  • Recent evidence [31]

The adapted content and delivery was discussed with the NDA quality improvement team, the NDA experts by experience group and the NDA Executive. We revised the manual based upon their feedback.

Phase 5 results

Table 4 summarises the adaptation work. Virtual delivery provided the opportunity to split the full-day workshop into three two-hour workshops to reduce burden and create opportunity for participating teams to consider the content between sessions. The revised intervention was presented to the NDA experts by experience group and the NDA Executive, both of which supported the proposed approach.

Table 4 A summary of the aim, methods and results for each intervention development phase
Fig. 2

Intervention logic model after phase 6

Phase 6 method

The aim for phase 6 was to refine the intervention based upon a further feasibility study exploring fidelity of delivery, appropriateness and acceptability of the intervention. The work was undertaken by researchers independent of the intervention development team (LM, EOH and JMc). To evaluate fidelity of delivery, the intervention manual and materials were coded for behaviour change techniques (BCTs) [12] and compared to the BCTs observed during online delivery of the workshops. Approximately 10% of the materials were double coded by EOH and LM and intercoder reliability was calculated. The intervention was delivered by the multidisciplinary NDA team (NDA clinical lead, previous 2017–2019 QIC participants, and facilitator (MS)) to two cohorts: one targeting improvement in type I diabetes and one targeting improvement in type II diabetes. Delivery of both type I and type II interventions was recorded and coded (EOH identified the BCTs delivered in the type I intervention; LM identified the BCTs delivered in the type II intervention). Coding discrepancies were discussed with a third author (JMc) in order to resolve disagreements. The percentage of BCTs delivered as specified in the manual and materials for both type I and type II diabetes was calculated. Semi-structured interviews with intervention recipients explored components of the theoretical framework of acceptability [32]. All recipients were invited to be interviewed. Thematic analysis [27] was used to analyse the interview data. The findings were presented to the research team, who proposed refinements. The design of the refined collaborative, and examples of the work being undertaken by participants, was presented to the NDA experts by experience group and the NDA Executive. The manual was further revised based upon this discussion.
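As a minimal illustration of the two fidelity calculations described above, intercoder percent agreement and fidelity of delivery can each be expressed as a simple proportion. The data and BCT labels below are hypothetical examples, not the study’s actual codings:

```python
# Illustrative sketch with hypothetical data: how intercoder percent
# agreement and fidelity of delivery might be computed from coded BCTs.

def percent_agreement(coder_a, coder_b):
    """Percentage of coding decisions on which two coders agree."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

def fidelity_of_delivery(specified, delivered):
    """Percentage of manual-specified BCTs observed during delivery."""
    specified, delivered = set(specified), set(delivered)
    return 100 * len(specified & delivered) / len(specified)

# Hypothetical double-coded segments (1 = BCT judged present, 0 = absent).
coder_a = [1, 1, 0, 1, 0, 1]
coder_b = [1, 1, 0, 0, 0, 1]
print(f"agreement: {percent_agreement(coder_a, coder_b):.0f}%")  # 5 of 6 match

# Hypothetical BCT labels coded in the manual vs. a recorded workshop.
manual = ["goal setting", "action planning", "feedback on behaviour", "credible source"]
workshop = ["goal setting", "action planning", "feedback on behaviour", "credible source"]
print(f"fidelity: {fidelity_of_delivery(manual, workshop):.0f}%")
```

In the study itself, coding discrepancies were resolved through discussion with a third author rather than by calculation alone; the sketch shows only how the reported percentages could be derived.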

Phase 6 results

The intervention was delivered to 17 teams from across England and Wales, as part of two cohorts focussing on: reduction in team median HbA1c in people with type I diabetes (10 teams, observed by EOH); and reduction of cardiovascular risk in people with type II diabetes (7 teams, observed by LM). There was 83% agreement on the initial double coding of BCTs identified in the intervention manual and materials. The review of the intervention manual identified twelve BCTs, which were delivered to each cohort. Table 5 describes the BCTs identified in the manual and within intervention delivery.

Table 5 BCTs observed in written materials and intervention webinars

There were two minor losses of fidelity: a number of BCTs intended to be delivered in session 1 were, instead, delivered in session 2, due to time constraints; and BCTs present in the manual were occasionally delivered at a different time from when indicated, albeit within the same workshop. Five healthcare professional intervention recipients were interviewed. Interviewees described that the intervention was acceptable and appropriate, describing a positive affective attitude and that the burden may be worth the opportunity cost (Table 6).

Table 6 Example quotes from phase 6 feasibility study

In reviewing the content and work of the NDA QIC, the NDA Experts by experience group were supportive of the current design. They asked for increased content to support teams to engage with local service users. Intervention content in relation to stakeholder engagement was extended in response to their feedback. The intervention is described in the TIDieR checklist (Table 9 in Appendix 4).

Discussion

We undertook multiphase development of an intervention to enhance national audit. In phase 1, we used multiple qualitative methods to describe what happens when a national audit reaches the hospital. Phases 2 and 3 used co-design methods to select the target for enhancement and subsequently to specify an intervention to implement the enhanced action planning process. In phase 4, we explored the intervention as an adjunct to the National Audit of Dementia and subsequently refined it. In phase 5, we adapted the intervention to a different audit, the National Diabetes Audit. Phase 6 involved a second exploration of the intervention, as an adjunct to the National Diabetes Audit, and led to further refinement. The resultant intervention is a specified national audit quality improvement collaborative involving virtual workshops, virtual outreach and virtual facilitated collaborative meetings led by a multidisciplinary team able to deliver the BCT ‘credible source’.

Complex interventions may be flexible [8]. The QIC intervention has been manualised to support fidelity of delivery across deliverers and has been tested through both face-to-face and virtual delivery. The intervention supports tailoring, whereby participants are supported to analyse their local context using the Theoretical Domains Framework and to select improvement strategies aligned to their analysis. The analysis of influences happens formally within one exercise started during the second virtual workshop, whereas the local context may be dynamic (e.g. [33]). Introduction of the feedback mechanism and ongoing facilitated virtual meetings provide a prompt to address emergent contextual influences. This tailoring work is proposed to support the intervention to be applicable across contexts, a proposition that will be explored further in the future process evaluation.

Across our intervention development phases, and within quality improvement work (e.g. [34]), the issue of time is an important consideration. In phase 3, the exploration of influences upon implementation identified that clinicians leading the hospital response to the national audit may not have the time to undertake the identified quality improvement practices. The co-design group proposed this might be addressed by changing those undertaking the work. A further suggestion, made by both the co-design group and participants in phase 4, was for the clinical lead to negotiate time with the clinical director, although phase 4 participants indicated this might serve more to gain recognition of the time costs than to actually have time released for the work. Phase 4 participants described that the burden of the intervention may be worthwhile and suggested that their personal goals influenced their assessment of the intervention.

To address the time burden, we provided potential participants with information about the time costs during recruitment and considered which tasks could be undertaken by actors not currently involved in the response to national audit: specifically, the organisational improvement team to undertake observations, a librarian to undertake systematic reviews, or the clinical governance team to identify priorities described in organisational documents. The duration of the workshops was extended so that enacting the QIC practices (e.g. specifying the goal, identifying stakeholders) was undertaken within protected education time. The intervention includes content both to implement the negotiation of time through job plans and to influence participants’ interpretation of the burden, through previous QIC participants describing that improvements make the time worthwhile. Previous work has found quality improvement collaboratives to be cost-effective [35]. Future work will investigate the cost-effectiveness of the National Audit QIC.

The work in phase 6 to code BCTs within the intervention highlighted the need to distinguish between active ingredients at different levels, for example:

  • The target behaviours being implemented as part of the enhanced local improvement work, for example, specifying the aim for the improvement delivers the BCT goal setting (outcome)

  • The BCTs delivered in order to implement the new behaviours (e.g. restructuring the social environment to analyse influences upon care)

  • The BCTs that support acceptability of the intervention (e.g. credible source giving information about health consequences to increase buy-in to the intervention).

Strengths and limitations

This paper describes iterative intervention development that draws upon evidence, theory and stakeholder views, gives specific consideration to implementation and explores feasibility in two contexts. Intervention development has been described in line with guidance (Table 8 in Appendix 3). The work exemplifies multi-purpose application of theory: the articulation of the programme theory as a logic model served to support exploration of how the intervention may influence outcomes. To develop the intervention content, we drew upon earlier theory-informed proposals describing influences upon the effectiveness of audit and feedback [20]. To review coherence, we considered the logic model in the context of earlier theory. To explore influences upon implementation, we used a theory-informed toolkit [19].

There are limitations to the work. The first feasibility study, and to a lesser extent the second, faced challenges recruiting interview participants. It is anticipated that this reflected participant availability during the pandemic, and perhaps also that participants were undertaking additional work as a result of the intervention. Participant responses point towards the acceptability, appropriateness and feasibility of the intervention; these initial findings will be built upon alongside later work to test effectiveness. The design of the process evaluation will take into account an evaluability assessment [8] by placing greater emphasis on methods with lower participant burden (observation and documentary analysis). The method, illustrated in Fig. 1, suggests a linear process. In reality, there were feedback loops; for example, the phase 3 work to identify influences upon implementation led to the phase 2 content being revisited (e.g. to remove loss-framing). Similarly, the logic model illustrates the stronger relationships between components, but it is likely that there are lesser interactional effects; for example, cognitive participation in one target behaviour may influence buy-in to another, and consideration of existing work may influence both the assessment of opportunity cost (proposed to affect change commitment) and the informational appraisal of implementation capability. The iterative development process has developed and refined the intervention; it is anticipated that the intervention will be refined further through later learning.

The intervention is a quality improvement collaborative, containing an educational workshop and outreach strategy for local team leads and facilitation of a learning collaborative. Quality improvement collaboratives are a common method to improve healthcare [31]. There is evidence that they may be effective, but a lack of justification for their content, incomplete reporting and multiple sources of bias undermine interpretation of the results [36, 37]. A strength of the current paper is the description from the selection of the target for enhancement, built upon an inductively developed description of the current process, through to a coherent, specified, manualised and feasible intervention.

Conclusion

We undertook iterative co-design work, building upon inductively identified opportunities to enhance national audit, through the systematic use of theory-informed proposals, evidence and stakeholder input to develop an intervention. We explored the feasibility of the intervention as an adjunct to two national audits. The intervention seeks to increase feedback recipients’ quality improvement capabilities by implementing target behaviours consistent with organisational readiness for change theory [23], such that local teams tailor improvement actions to their local context and develop organisational commitment. We plan to evaluate the effectiveness of the intervention as part of a cluster randomised trial and process evaluation. The planned study will investigate whether the NDA plus the National Audit QIC leads to greater improvement in patient outcomes compared to NDA feedback alone. The theory-informed process evaluation will explore diabetes specialist teams’ engagement, implementation, fidelity and tailoring. The economic evaluation will micro-cost the QIC, estimate the cost-effectiveness of NDA feedback with the QIC and estimate the budget impact of NHS-wide QIC roll-out. Further planned work will adapt the intervention to three further audits in a different national context.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available in order to maintain the anonymity of participants.

Abbreviations

BCT:

Behaviour Change Technique

HbA1c:

haemoglobin A1c or glycated haemoglobin, a measure of blood glucose levels

NDA:

National Diabetes Audit

NPT:

Normalisation Process Theory

PDSA:

Plan, do, study, act

ERIC:

Expert recommendations for implementing change

QIC:

Quality Improvement Collaborative

TIDieR:

Template for intervention description and replication

4AT:

Rapid clinical test for delirium

References

  1. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. https://doi.org/10.1002/14651858.CD000259.pub3.

  2. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, et al. Clinical performance feedback intervention theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):1–25.

  3. O’Cathain A, Croot L, Duncan E, Rousseau N, Sworn K, Turner KM, et al. Guidance on how to develop complex interventions to improve health and healthcare. BMJ Open. 2019;9(8):e029954.

  4. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

  5. Czajkowski SM, Powell LH, Adler N, Naar-King S, Reynolds KD, Hunter CM, et al. From ideas to efficacy: the ORBIT model for developing behavioral treatments for chronic diseases. Health Psychol. 2015;34(10):971.

  6. Kok G, Gottlieb NH, Peters GJ, Mullen PD, Parcel GS, Ruiter RA, et al. A taxonomy of behaviour change methods: an intervention mapping approach. Health Psychol Rev. 2016;10(3):297–312.

  7. Bleijenberg N, Janneke M, Trappenburg JC, Ettema RG, Sino CG, Heim N, et al. Increasing value and reducing waste by optimizing the development of complex interventions: enriching the development phase of the Medical Research Council (MRC) Framework. Int J Nurs Stud. 2018;79:86–93.

  8. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374:n2061.

  9. Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443.

  10. Duncan E, O’Cathain A, Rousseau N, Croot L, Sworn K, Turner KM, et al. Guidance for reporting intervention development studies in health research (GUIDED): an evidence-based consensus study. BMJ Open. 2020;10(4):e033516.

  11. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

  12. Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. 2013;46(1):81–95.

  13. Foy R, Skrypak M, Alderson S, Ivers NM, McInerney B, Stoddart J, et al. Revitalising audit and feedback to improve patient care. BMJ. 2020;368:m213.

  14. Royal College of Psychiatrists. National Audit of Dementia care in general hospitals 2012-13: second round audit report and update. 2013.

  15. NHS Digital. National Diabetes Audit, 2020-21 quarterly report. 2021.

  16. Bryson JM, Patton MQ, Bowman RA. Working with evaluation stakeholders: a rationale, step-wise approach and toolkit. Eval Program Plann. 2011;34(1):1–2.

  17. O’Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007;2007(4):CD000409.

  18. Sykes M, Thomson R, Kolehmainen N, Allan L, Finch T. Impetus to change: a multi-site qualitative exploration of the national audit of dementia. Implement Sci. 2020;15(1):1–3.

  19. Department of Health. Prime Minister’s challenge on dementia 2020. 2015. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/414344/pm-dementia2020.pdf.

  20. Colquhoun HL, Carroll K, Eva KW, Grimshaw JM, Ivers N, Michie S, et al. Advancing the literature on designing audit and feedback interventions: identifying theory-informed hypotheses. Implement Sci. 2017;12(1):1–0.

  21. Mc Sharry J, Fredrix M, Hynes L, Byrne M. Prioritising target behaviours for research in diabetes: Using the nominal group technique to achieve consensus from key stakeholders. Res Involv Engagem. 2016;2(1):1–9.

  22. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):1–4.

  23. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):1–9.

  24. May CR, Finch T, Ballini L, MacFarlane A, Mair F, Murray E, et al. Evaluating complex interventions and health technologies using normalization process theory: development of a simplified approach and web-enabled toolkit. BMC Health Serv Res. 2011;11(1):1–1.

  25. Forsetlund L, Bjørndal A, Rashidian A, Jamtvedt G, O’Brien MA, Wolf FM, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;2009(2):CD003030.

  26. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.

  27. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

  28. Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):1–8.

  29. Institute for Healthcare Improvement. The breakthrough series: IHI’s collaborative model for achieving breakthrough improvement, IHI Innovation Series white paper. Cambridge: Institute for Healthcare Improvement; 2003. Available at http://www.IHI.org.

  30. Sykes M, Berry A, Colling S, Young B. Report on the National Diabetes Audit (NDA) quality improvement collaboratives. London: Diabetes UK; 2020.

  31. Zamboni K, Baker U, Tyagi M, Schellenberg J, Hill Z, Hanson C. How and under what circumstances do quality improvement collaboratives lead to better outcomes? A systematic review. Implement Sci. 2020;15:1–20.

  32. Sekhon M, Cartwright M, Francis JJ. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework. BMC Health Serv Res. 2017;17(1):1–3.

  33. Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4.

  34. Stephens TJ, Peden CJ, Pearse RM, Shaw SE, Abbott TE, Jones EL, et al. Improving care at scale: process evaluation of a multi-component quality improvement intervention to reduce mortality after emergency abdominal surgery (EPOCH trial). Implement Sci. 2018;13(1):1–6.

  35. De La Perrelle L, Radisic G, Cations M, Kaambwa B, Barbery G, Laver K. Costs and economic evaluations of quality improvement collaboratives in healthcare: a systematic review. BMC Health Serv Res. 2020;20(1):1–0.

  36. Schouten LM, Hulscher ME, van Everdingen JJ, Huijsman R, Grol RP. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336(7659):1491–4.

  37. Wells S, Tamir O, Gray J, Naidoo D, Bekhit M, Goldmann D. Are quality improvement collaboratives effective? A systematic review. BMJ Qual Saf. 2018;27(3):226–40.

Acknowledgements

We would like to acknowledge the time, expertise and dedication of the co-production and advisory group members and to thank the study participants. LA acknowledges support from the National Institute for Health Research Applied Research Collaboration South-West Peninsula. TF acknowledges support from the National Institute for Health Research Applied Research Collaboration North-East and North Cumbria.

Funding

This report is independent research arising from a Doctoral Research Fellowship (DRF-2016-09-028) supported by the National Institute for Health Research. The views expressed in this presentation are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health and Social Care.

Author information

Authors and Affiliations

Authors

Contributions

MS, RT, TF and LA were involved in designing study phases 1–4. MS facilitated the co-production group and collected the data, under supervision from RT, NK, LA and TF. MS and TF coded the data in phase 1. MS and the co-production group synthesised the data in phase 1, under supervision from RT, NK, LA and TF. MS and NK coded the BCTs in phase 4. EOH and LM undertook the interviews, observations and documentary analysis in phase 6 and, with JMc, coded the BCTs. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Michael Sykes.

Ethics declarations

Ethics approval and consent to participate

All participants gave consent to participate. Written informed consent was sought from potential participants for interviews and one-to-one observations. For observations of groups, information was given in advance to the Chair of the meeting or senior member of the group, with a request to distribute it to all members. This information included details about the study aims, methods, risks and benefits, and how participants were able to have their data excluded from the study. The project was approved by Newcastle University Faculty of Medical Sciences Ethics Committee (Application: 01266/12984/2017) (phases 1–4) and the National University of Ireland (Galway) Ethics Committee (Phase 6).

Consent for publication

Not applicable.

Competing interests

During the work to describe the national audit of dementia (phase 1), MS became the quality improvement lead for the National Diabetes Audit. There are no other competing interests to declare.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Table 7

Table 7 Co-design group definitions to terms within the initial outcome of the intervention

Appendix 2

Summarised notes made during phase 4 intervention delivery

During the workshop activity to analyse performance, participants explored the impact of the data collection process upon performance; having someone involved in data collection in the workshop was valuable;

Participants jumped to solutions and stakeholders associated with solutions, rather than specifying the behaviour before identifying stakeholders and exploring influences;

Negotiating organisational rules about the use of comparators was important; the term ‘cognitive load’ appeared confusing;

Participants drew on memory to triangulate, a prompt to check their memory against data may be useful;

The exercise to review potential strategies from the adapted ERIC list [22] reduced group energy and appeared burdensome;

The logic model exercise appeared difficult but was valued by participants;

The workbook used within the workshop was observed to be a valuable support for the graded tasks and provided a reminder after the workshop, but both observation and interviews identified the opportunity to combine some exercises and move others to whole-group exercises.

Appendix 3

Table 8

Table 8 GUIDED checklist (Duncan et al. 2020) [10]

Appendix 4

Table 9

Table 9 The TIDieR (Template for Intervention Description and Replication) Checklist

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Sykes, M., O’Halloran, E., Mahon, L. et al. Enhancing national audit through addressing the quality improvement capabilities of feedback recipients: a multi-phase intervention development study. Pilot Feasibility Stud 8, 143 (2022). https://doi.org/10.1186/s40814-022-01099-9

Keywords

  • Audit and feedback
  • Quality improvement
  • Intervention development
  • Implementation