The use of cognitive task analysis in clinical and health services research — a systematic review

Abstract

Background

At times, clinical case complexity and different types of uncertainty challenge less experienced clinicians and make the naive application of clinical guidelines inappropriate. Cognitive task analysis (CTA) methods are used to elicit, document and transfer tacit knowledge about how experts make decisions.

Methods

We conducted a methodological review to describe the use of CTA methods in understanding expert clinical decision-making. We searched MEDLINE, EMBASE and PsycINFO from inception to 2019 for primary research studies which described the use of CTA methods to understand how qualified clinicians made clinical decisions in real-world clinical settings.

Results

We included 81 articles (80 unique studies) from 13 countries, published from 1993 to 2019, most commonly from surgical and critical care settings. The most common aims were to understand expert decision-making in particular clinical scenarios, to use expert decision-making in the development of training programmes, to establish whether decision support tools were warranted, and to understand procedural variability and identify or reduce error. The critical decision method (CDM) and CTA interviews were most frequently used, with hierarchical task analysis, task knowledge structures, think-aloud protocols and other methods less commonly used. Studies used interviews, observation, think-aloud exercises, surveys, focus groups and a range of more CTA-specific methodologies such as the systematic human error reduction and prediction approach. Researchers used CTA methods to investigate routine/typical (n = 64), challenging (n = 13) or more uncommon, rare events and anomalies (n = 3).

Conclusions

The elicitation of expert tacit knowledge using CTA has seen increasing use in clinical specialties working under challenging time pressures, complexity and uncertainty. CTA methods have great potential in the development, refinement, modification or adaptation of complex interventions, clinical protocols and practice guidelines.

Registration

PROSPERO ID CRD42019128418.


Background

Decision-making is ubiquitous in clinical practice, as health professionals gather information; evaluate test results; define problems; set treatment goals; start, stop or delay treatment; and advise, refer, admit or discharge patients [1]. Decisions are affected by factors such as their difficulty and familiarity [2]. People have a limited capacity for processing information [3]: the greater the levels and types of uncertainty involved, the more difficult it becomes to adequately understand a decision situation and the range of possible actions [4, 5]. Any adequate account of clinical decision-making must therefore deal with how clinicians handle uncertainty, which is pervasive in medicine [6, 7].

How we recognise, classify and reduce uncertainty is the subject of a vast literature [8, 9] with poor information, inadequate understanding, indeterminacy, complexity, ambiguity, unpredictability of phenomena, conflicting rules and beliefs affecting our grasp on situations and outcomes [8, 10,11,12]. Decision support algorithms often need to be used in combination with other methods of inference [13]; in the absence of tractable problems and shared assumptions about phenomena that are well-defined and relatively objective, such deductive approaches to problem-solving may perform poorly [14,15,16,17]. The theory of bounded rationality predicts that under our computational, environmental and epistemological constraints, we become satisficers, drawing on heuristics to inductively infer optimal, rather than deductively establishing perfect, solutions [18, 19]. In such situations, the tacit knowledge of clinicians — the sort that is not easily defined or learned [20] — is critical as they attempt to synthesise patient values and best, if often still ambiguous, research evidence [21].

As technological advances increase cognitive demands on people [22], it becomes more important to incorporate the cognitive aspects of performance into task protocols and systems in which they make inferences, diagnoses, judgements and decisions [23]. The increasing complexity of healthcare systems is a challenge for health professionals and researchers and a risk for patients [24,25,26,27].

How clinicians use tacit knowledge to make decisions under conditions of uncertainty has been flagged as critical for the development of interventions in areas including emergency medicine [28], prescribing [29], mental health [30], liver cirrhosis [31], urology [32] and the management of care transitions [33]. The new Medical Research Council (MRC) framework calls for the use of novel designs that can help reduce decision-maker uncertainty and assess the feasibility of interventions by establishing optimal content and delivery [34]. Intervention development requires that one properly formulates a problem, determines needs, examines current practice and context, and models process and outcomes, all of which inform the feasibility of an intervention [35]. Cognitive task analysis (CTA) is an umbrella term for tools and techniques used to describe the knowledge and strategies involved in judging situations, setting goals and making decisions. One objective of CTA methods, and of the naturalistic decision-making paradigm from which they derive, is to help experts express tacit knowledge, enabling researchers and novices to learn and systems to be improved [36]. This makes CTA methods useful in assessing the feasibility and usability of interventions [37,38,39,40,41].

Despite an increasing concern with decision-making, uncertainty and complexity in the health science literature, the utility of CTA methods has received little attention. Their use in clinical decision-making has not been the subject of a systematic overview. They are not amongst the methods discussed by the MRC framework as approaches to developing and evaluating complex interventions [34, 42, 43], although complex interventions often involve decision-making [44,45,46,47], and, in an editorial in this journal, Pat Hoddinott flags the roles of tacit knowledge and fast-and-slow thinking in intervention development [45]. For these reasons, we undertook a systematic methodological review [48] to understand how CTA methods are being used in real-world clinical settings and which objectives others have deemed them useful to address.

Methods

This review was registered on the PROSPERO database on 15 April 2019, ID CRD42019128418 [49], and has been conducted and reported according to PRISMA guidelines [50]. Published primary research studies were eligible if they (1) described the use of CTA methods to understand how (2) qualified clinicians (3) made clinical decisions (4) in real-world clinical settings. Studies were ineligible if (1) they did not use CTA methods; (2) the participants were students, patients or members of the public; (3) there was no decision (just a simple task breakdown) or decisions were non-clinical; or (4) the setting was simulated, rather than a real-world environment. Excluded studies concerning non-clinical decisions had objectives such as improving the physical environment [51], assessing the usability of information technology [52] and modelling the workplace [53]. Studies which used CTA methods to test an already developed simulator were excluded; studies which used them to gather data about real-world environments in order to develop a simulator were included. Conference abstracts and unpublished literature were included where eligible. There were no date restrictions on when articles were published, and articles in any language were included. Systematic reviews, evidence-based guidelines, literature reviews, commentaries and opinion pieces were excluded.

An initial MEDLINE scoping search in December 2018 identified that Medical Subject Headings (MeSH terms) were overly sensitive in their retrieval of relevant articles. Screening a random sample of 100 of the 29,380 citations indexed with “task performance and analysis” (the MeSH term most commonly associated with CTA studies in our scoping search), we found no CTA studies. Therefore, we took the unusual step of using only free-text terms in the searches: those which described CTA methods and related terms, e.g. “Applied Cognitive Task Analysis” and “Critical Decision Method” [54]. The full search strategy is outlined elsewhere [49].
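A zero count in a random sample still bounds how common eligible studies could be under that index term. Assuming simple random sampling, the exact one-sided binomial upper confidence limit can be computed as follows (our illustration, not a calculation reported by the authors):

```python
# With 0 CTA studies found in a random sample of n = 100 citations, the
# one-sided 95% upper confidence bound on the true proportion p of CTA
# studies under this index term solves (1 - p)^n = alpha.
n = 100
alpha = 0.05
upper = 1 - alpha ** (1 / n)  # exact binomial bound; close to the "rule of three" 3/n
print(f"Upper 95% bound: {upper:.3f}")  # about 0.030, i.e. at most ~3%
```

In other words, even a maximally generous reading suggests that at most a few percent of the ~29,000 citations under this MeSH term could be CTA studies, supporting the decision to abandon MeSH in favour of free-text terms.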

The search terms were applied in MEDLINE, EMBASE and PsycINFO, through Ovid. Searches covered studies from database inception (MEDLINE 1966, EMBASE 1947, PsycINFO 1967) to 12 March 2019. Where articles could not be retrieved through copyright libraries, the reviewers contacted authors directly to obtain copies. Four potentially eligible articles were excluded as we were unable to retrieve the full text. Of the remainder, duplicates were removed, and two reviewers, working independently and in duplicate, used the predefined eligibility criteria to identify eligible citations. Full documents were retrieved and assessed for eligibility, with reasons for rejection documented. A third reviewer resolved any disputes.

Descriptive data extraction tables were piloted by the two main reviewers before use. We extracted data into tables covering the following: country of study; clinical setting; study aims/objectives; clinical specialty; CTA methods used [54]; data capture method (interviews, self-reports, observation, automated capture); data targets (past, present or future); whether the events described were routine/typical, challenging or rare/anomalous; generality of events (e.g. job/task, abstract/general or incident/event); and data presentation (e.g. textual descriptions, tables, graphs) [54]. Once data extraction was complete, the tables were used to group some of the columns for presentation of results, and filters were applied to summarise counts for each column.
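The extraction-and-filtering workflow described above can be sketched as a simple record structure plus a counting step; the field names mirror the columns listed in the text, but the exact coding scheme is our assumption:

```python
from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class ExtractionRecord:
    """One row of the descriptive data extraction table (fields from the text; coding illustrative)."""
    country: str
    clinical_setting: str
    aims_objectives: str
    clinical_specialty: str
    cta_methods: List[str]        # e.g. ["critical decision method"]
    data_capture: List[str]       # interviews, self-reports, observation, automated capture
    data_target: str              # "past", "present" or "future"
    event_type: str               # "routine/typical", "challenging" or "rare/anomalous"
    event_generality: str         # "job/task", "abstract/general" or "incident/event"
    data_presentation: List[str]  # e.g. ["textual descriptions", "tables"]

def tally(records: List[ExtractionRecord], column: str) -> Counter:
    """Summarise counts for one extraction column (the 'filters' step in the text)."""
    counts: Counter = Counter()
    for record in records:
        value = getattr(record, column)
        if isinstance(value, list):
            counts.update(value)  # a study may use several methods or data sources
        else:
            counts[value] += 1
    return counts
```

For example, `tally(records, "data_target")` would reproduce the past/present/future tallies reported under Results; list-valued columns explain why some method counts sum to more than the 80 included studies.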

As summary effect measures were not the primary goal of this methodological study, we did not collect data on, or assess, risk of bias either for individual studies or across studies [48, 55].

Results

Summary of included studies

Electronic database searches retrieved 1060 results; a further four were identified through contact with an author to retrieve a full copy and through incidental identification during the data extraction phase. After duplicates were removed, 1053 articles remained, of which 904 were excluded at title and abstract stage. One hundred and forty-nine articles were assessed at full text, and a further 68 were excluded, leaving 81 articles, representing 80 unique studies, for inclusion in the review (Fig. 1).

Fig. 1 PRISMA flowchart for studies included

Sixty-eight full-text articles were rejected. These articles focused on developing a description of steps for a procedure/task rather than on decision-making (n = 22); examined non-clinical decisions (n = 13); took place in a simulation setting (n = 11); studied decisions made by patients, the public or students rather than expert clinicians (n = 9); were systematic or literature reviews with no primary research (n = 5); or did not report on the CTA methods used (n = 4). Copies of the remaining four articles could not be retrieved after contact with authors.
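The counts above and in Fig. 1 can be checked arithmetically; a minimal sketch (our own consistency check, reading the four unretrievable articles as part of the 68 full-text exclusions):

```python
# Counts as reported in the text and Fig. 1 (illustrative check, not part of the review).
retrieved = 1060              # electronic database searches
additional = 4                # found via author contact and during data extraction
after_deduplication = 1053    # so 1060 + 4 - 1053 = 11 duplicates removed
excluded_title_abstract = 904
full_text_assessed = after_deduplication - excluded_title_abstract   # 149
excluded_full_text = 68
included_articles = full_text_assessed - excluded_full_text          # 81

# Full-text exclusion reasons (22, 13, 11, 9, 5, 4) plus 4 unretrievable articles
reasons = [22, 13, 11, 9, 5, 4]
unretrievable = 4

assert retrieved + additional - after_deduplication == 11
assert full_text_assessed == 149
assert included_articles == 81
assert sum(reasons) + unretrievable == excluded_full_text
```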

The five excluded systematic or literature reviews did not overlap in scope with this review: each focused either on training or on a specific procedure or setting.

There was an apparent increase in the number of published studies using CTA methods to understand expert clinical decision-making between 2006 and 2015 (Fig. 2). The earliest included study was published in 1993 and the most recent in 2019. The majority of studies were carried out in the USA (n = 48), with other countries of origin being the UK (n = 9), Canada (n = 5), Australia (n = 5), Slovakia (n = 3), Ireland (n = 2) and Taiwan (n = 2), and with one study each carried out in Germany, Iran, France, Norway, Sweden and the Netherlands. CTA studies were carried out in hospitals (n = 60), university or medical schools (n = 11), pre-hospital settings (n = 3), the community (n = 5) and a military training command setting (n = 1). Clinical specialties represented were surgery (n = 30); critical or intensive care, including neonatal and paediatric settings (n = 11); anaesthesia/trauma medicine (n = 10); general medicine (n = 7); pre-hospital and emergency medicine (n = 6); primary care (n = 6); infectious diseases (n = 4); obstetrics and gynaecology (n = 3); paediatrics (n = 2); interventional radiology (n = 2); neuro-rehabilitation (n = 2); pathology (n = 1); pharmacy (n = 1); and diabetes education (n = 1).

Fig. 2 Number of studies using CTA methods by year (n = 80*). *Year of publication unknown for one publication, a conference poster identified through contact with an author to obtain a copy of a different article

Aims and objectives of included studies

In a large number of studies, the primary aim was to use CTA to understand expert decision-making in the management of a particular clinical scenario (n = 35). Other common objectives included using clinician decision-making in the development of training models or educational frameworks (n = 14); understanding the management of a procedure to establish whether a support tool or application was warranted (n = 7); investigating information omission when describing or teaching a procedure (n = 6); creating a framework to compare variability in a procedure (n = 4); error identification or reduction (n = 4); and comparing differences between novices and experts (n = 4). In smaller numbers of studies, objectives included understanding a procedure with the aim of optimising it (n = 3); understanding critical steps in a procedure to provide effective teaching and procedure assessment (n = 3); breaking down steps in a task to understand key decision points (n = 2); developing a simulator (n = 1); a methodological investigation of how much critical information is gained from each CTA interview over and above a gold standard (n = 1); comparing two methods of undertaking the same procedure (n = 1); and determining the number of experts needed to develop gold standard protocols for a procedure (n = 1).

Knowledge elicitation

There are a number of methods of knowledge elicitation described within CTA, as defined in Crandall et al. [54]. Of these, most studies included in this review used the critical decision method (CDM) (n = 36), CTA interviews (n = 30) or both (n = 1). Less commonly used CTA methods were the following: hierarchical task analysis (n = 11); task knowledge structures (n = 8), which describe how task elements, including relations between objects and associated actions, are represented in an individual’s memory [56]; think-aloud exercises (n = 4); team knowledge audit (n = 2); timeline analysis (n = 2); Systematic Human Error Reduction and Prediction Approach (SHERPA) (n = 2); and the concepts, processes and principles approach (n = 2). Some lesser-known, less easily accessed methods, each seen in only one study, included the Patient and Community Engagement Research (PaCER) approach [57], task diagram construction [23], video timing analysis [58], distributed situation awareness [59], incident analysis [60], knowledge analysis [61] and the Delphi method for consensus [62].

CTA methods

In the 35 studies which had the aim of understanding expert decision-making in the management of a particular clinical scenario, most used cognitive task analysis interviews (n = 31), often using the critical decision method (n = 23). Other studies aimed to develop a training model or educational framework (n = 14) or to create a framework to compare variability in a procedure (n = 4). Studies with these aims, in addition to using cognitive task analysis interviews (n = 12), also used decomposition approaches such as task knowledge structures (n = 4) and hierarchical task analysis (n = 4), which are seen less often when the main aim is to understand expert decision-making in the management of a particular clinical scenario.

Most studies (n = 42) used a combination of data collection sources. Data were collected through interviews (n = 73), observation (n = 27), think-aloud exercises (n = 10), surveys and questionnaires (n = 6), focus groups (n = 5), review of protocols generated through interviews (n = 4) and SHERPA (n = 3). Additional, less commonly used methods of data collection were comparison of CTA interviews against a gold standard for a specific procedure (n = 2), consensus exercises (n = 2), self-reports (n = 2), automated capture (n = 2), task breakdown (n = 1) and generation of a description whilst simulating a procedure (n = 1).

Data targets and focus of included studies

Data targets, or the focus of the decision-making investigation, were mostly based in the past (n = 41) or the present (n = 36), with a small number based in the future (n = 2) and insufficient information in one article. Studies using data targets in the past typically did so by asking clinicians to recall a specific case or incident [63, 64]. Studies using data targets in the present tended to use interviews, observations and think-aloud exercises to look at procedures and practice either in real time or shortly after the event, using specific cases or think-aloud for a theoretical process. For example, one study used interviews and think-aloud exercises to identify task cues in critical combat medical procedures [65], whilst another used interviews to understand the principles guiding clinicians’ decisions whilst undertaking surgical procedures [66]. Future data targets included participants being presented with a clinical scenario of symptomatic gallstones and being interviewed on the decisions they would make for this patient [67]. The events investigated in eligible studies were largely routine/typical (n = 64), with fewer studies investigating challenging (n = 13) or more uncommon, rare events and anomalies (n = 3).

Articles were predominantly focussed on either an overall job or task (n = 37), such as investigating the clinical knowledge omitted when teaching cricothyrotomy and focussing on the procedure itself rather than specific patient cases [68], or specific incidents or events (n = 35) such as describing cues used by nurses in decision-making when responding to a clinical alarm, involving recall of individual cases [69]. There were a smaller number of studies that looked at more abstract or general tasks (n = 7). These investigated more general settings such as identifying decisions, cues and novice traps in laparoscopic surgery [70]; understanding how emergency physicians and residents experience busy emergency department environments [71]; and understanding the cognitive and physical processes of interventional radiology when eliciting relevant information for user interface design for human-computer interaction [72]. The level of task generality was unclear in one article.

Presentation of results

In addition to textual descriptions, most articles used tables, graphs and/or illustrations (n = 66), with others using qualitative models such as flowcharts (n = 30) and simulation, or numerical and symbolic models (n = 4) such as task timelines, the Objective Structured Assessment of Technical Skills (OSATS) scale, or a taxonomy containing job, task, subtask, element and motion levels. In addition, some articles used hierarchical task analysis decomposition trees: tree-like diagrams breaking down steps and substeps in a procedure, for example laparoscopic cholecystectomy [73]. Others used workflow models (step-by-step instructional guides with decision points and “if” statements [74]), decision analysis tables (a breakdown of decision points and how these relate to action, cause and outcome for each situation [75]) and procedural maps (mapping diagrams linking the steps in a straightforward procedure, steps that may change from the original plan, and other steps that can affect a decision whilst undertaking a procedure, such as laparoscopic cholecystectomy [76]).

Discussion

This review shows a steady increase in the use of CTA methods in health research over 20 years, with applications such as transferring expert knowledge to novices and providing the decisional architecture for simulators, education or training modules. CTA is more commonly used in surgery and intensive care research and least frequently in pharmacy and interventional radiology. This perhaps reflects an abiding concern with the time pressures, complexity and uncertainty that surround decision-making in these contexts [77,78,79,80,81,82]. The most commonly used CTA methods were interviews, especially using CDM, and slightly less frequently hierarchical task analysis (HTA). There are a number of CTA methods for which no published application was found, although this might reflect the scope of the review. For instance, many HTA studies focused narrowly on task breakdown, without extending analysis to an understanding of decision-making.

This is the largest overview of the use of CTA in clinical and healthcare settings. It expands considerably on a previous review of seven nursing studies, which found CDM a valuable tool for eliciting expert knowledge [83]. We consider a sample of 80 studies adequate to characterise approaches to the use of CTA in the health sciences; recourse to further databases and the grey literature would be likely to yield diminishing marginal returns [84].

In verbal accounts, experts may leave out up to 70% of components and decision steps needed to successfully perform a task, resulting in intervention protocols that are incomplete or ineffective [68]. That experts cannot themselves articulate what they do, together with optimism bias and groupthink [45], may explain the generally disappointing performance of theory-based [85] and manualised [86] interventions in healthcare and the abiding concern with better specification of interventions [87,88,89]. It is often undesirable to standardise complex interventions [42, 90]. But amid complaints about “cookbook” approaches [91], even the greatest advocates of flexible delivery acknowledge the need for a stable core [92], without which interventions cannot be differentiated [93, 94] or best practice defined and implemented. The current trend in health sciences is a more sophisticated approach to standardisation. Surgeons advocate detailed task decomposition, descriptions of the conditions and limits to standardisation [95] and that intervention stability is assessed before clinical outcomes [96]. Mental health researchers recognise that contingencies require documented adaptations to evidence-based practices [97]. As this more nuanced style of documentation is dependent on integrating best evidence with tacit knowledge [98], CTA methods are ideally placed to support the development, standardisation and adaptation of complex interventions — as well as training in their application. Several of the included studies also show the potential of CTA methods in the creation and modification of other care process specification documents — practice guidelines, decision rules, pathways, care plans and care maps [99]. In the domain of quality improvement, this review provides examples of the use of the HTA and SHERPA approaches for error identification/reduction, CTA interviews and HTA for understanding variation in practice, particularly within multidisciplinary teams [100]. 
These methods may have application in clinical areas where reducing variation in practice is seen as desirable [101]. CTA methods seem well-suited to elicit, document and share the tacit knowledge used in decision-making. Meta-analyses comparing CTA with other training methods have identified large effects in terms of procedural knowledge and technical performance [102, 103].

The scope of this review did not include studies that primarily focused on simulation studies or those in educational or training settings. We excluded studies focused on the use of CTA methods to evaluate simulations or educational programmes, unless part of their remit also included the decision-making of qualified health professionals in naturalistic settings. During the selection process, we excluded large numbers of studies evaluating simulations, indicating the potential for a systematic overview of that literature. Sound meta-analytic evidence already exists for the effectiveness of CTA as a basis for training, compared with other approaches [103], as well as smaller systematic reviews on its application in surgical education [104, 105]. As CTA-based training is more resource-intensive, a cost-utility analysis, modelling the incremental cost of improved learning outcomes and subsequent performance, may be required before introducing it into practice.

Limitations

This review was purely descriptive; it assessed neither the outcomes of included studies nor their risk of bias. Some readers will feel that reviews must always be evaluative, with endpoints and critical appraisal a necessary condition, but our approach is in line with guidance on methodological reviews [48, 55] and the conduct of many such reviews, including in this journal [106,107,108]. Individuals often overestimate their own expertise or vary in their self-assessment according to context [109, 110]. Further research on this topic could assess expertise using evidence of qualifications, track record, experience and competency in applying knowledge and experience to new situations, although these approaches may not be applicable to all forms of expert knowledge [111, 112]. The use of CTA methods appears to have increased over recent years, before a small decrease. It has been reported that there is sustained interest in CTA methods in relation to training [102], but the number of studies eligible for our review (Fig. 2) is too small to rule out chance variation. The sciences and social sciences represent diverse, competing schools of thought and methodological approaches, and it is common for promising approaches not to become permanently embedded in routine practice [113,114,115,116].

Furthermore, we were unable to retrieve four apparently eligible articles through copyright libraries or attempts to contact authors. On the basis of the titles and abstracts, these articles [117,118,119,120] replicate settings (dental hygiene, endoscopic surgery) and applications (elicitation of expert knowledge, task description) documented in other studies. We took a liberal view of expertise as characteristic of qualified — as opposed to unqualified — health professionals. In applying CTA methods, researchers may consider circumscribing specialist subgroups of qualified clinicians, based on peer nomination, influence and cues such as certification, past performance or test results [121].

Conclusions

The use of CTA methods appears to be increasing in clinical specialties characterised by time scarcity, complexity and uncertainty. However, numbers are small, and this review only looked at CTA methods used in expert clinical decision-making. There are many other applications of CTA not covered in this review, such as novice decision-making, training and education. Whilst the role of CTA methods in education is well documented, they have under-recognised potential in the development, refinement, modification or adaptation of complex interventions and care process specifications.

Availability of data and materials

Not applicable.

References

  1. Ofstad EH, Frich JC, Schei E, Frankel RM, Šaltytė Benth J, Gulbrandsen P. Clinical decisions presented to patients in hospital encounters: a cross-sectional study using a novel taxonomy. BMJ Open. 2018;8:e018042.


  2. Costa A, Duñabeitia JA, Keysar B. Language context and decision-making: challenges and advances. Q J Exp Psychol. 2019;72:1–2.


  3. Kahneman D. Attention and effort. Englewood Cliffs: Prentice-Hall; 1973. Chapter 2, p. 2–12.

  4. Bossaerts P, Yadav N, Murawski C. Uncertainty and computational complexity. Philos Trans R Soc Lond B Biol Sci. 2019;374:20180138.


  5. Phillips J, Morris A, Cushman F. How we know what not to think. Trends Cogn Sci. 2019;23:1026–40.


  6. Bhise V, Rajan SS, Sittig DF, Morgan RO, Chaudhary P, Singh H. Defining and measuring diagnostic uncertainty in medicine: a systematic review. J Gen Intern Med. 2018;33:103–15.


  7. Matthews RA. The origins of the treatment of uncertainty in clinical medicine - part 2: the emergence of probability theory and its limitations. J R Soc Med. 2020;113:225–9.


  8. Han PKJ, Djulbegovic B. Tolerating uncertainty about conceptual models of uncertainty in health care. J Eval Clin Pract. 2019;25:183–5. https://doi.org/10.1111/jep.13110.

  9. Helou MA, DiazGranados D, Ryan MS, Cyrus JW. Uncertainty in decision making in medicine: a scoping review and thematic analysis of conceptual models. Acad Med. 2020;95:157–65.


  10. Buchanan F, Cohen E, Milo-Manson G, Shachak A. What makes difficult decisions so difficult?: an activity theory analysis of decision making for physicians treating children with medical complexity. Patient Educ Couns. 2020. https://doi.org/10.1016/j.pec.2020.04.027.

  11. Lipshitz R, Strauss O. Coping with uncertainty: a naturalistic decision-making analysis. Organ Behav Hum Decis Process. 1997;69:149–63.


  12. Pomare C, Churruca K, Ellis LA, Long JC, Braithwaite J. A revised model of uncertainty in complex healthcare settings: a scoping review. J Eval Clin Pract. 2019;25(2):176–82. https://doi.org/10.1111/jep.13079. Epub 2018 Nov 22. PMID: 30467915.

  13. Dissanayake PI, Colicchio TK, Cimino JJ. Using clinical reasoning ontologies to make smarter clinical decision support systems: a systematic review and data synthesis. J Am Med Inform Assoc. 2020;27:159–74.


  14. Epstein RH, Dexter F. Unintended consequences of clinical decision support. Anesth Analg. 2019;128(6):e124. https://doi.org/10.1213/ANE.0000000000004128.

  15. Even Chorev N. Data ambiguity and clinical decision making: a qualitative case study of the use of predictive information technologies in a personalized cancer clinical trial. Health Informatics J. 2019;25:500–10.


  16. Mercuri M. How do we know if a clinical practice guideline is good? A response to Djulbegovic and colleagues’ use of fast-and-frugal decision trees to improve clinical care strategies. J Eval Clin Pract. 2018;24:1255–8.


  17. Wright A, Wright AP, Aaron S, Sittig DF. Smashing the strict hierarchy: three cases of clinical decision support malfunctions involving carvedilol. J Am Med Inform Assoc. 2018;25:1552–5.


  18. Simon HA. Information processing models of cognition. Annu Rev Psychol. 1979;30:363–96.


  19. Simon HA. A behavioral model of rational choice. Q J Econ. 1955;69:99–118.


  20. Polanyi M. The tacit dimension. Knowledge Organ. 1997. https://doi.org/10.1016/b978-0-7506-9718-7.50010-x.

  21. Thornton T. Tacit knowledge as the unifying factor in evidence based medicine and clinical judgement. Philos Ethics Humanit Med. 2006;1:E2.


  22. Howell WC, Cooke NJ. Training the human information processor: a review of cognitive models. In: Goldstein IL, editor. Frontiers of Industrial and Organizational Psychology, the Jossey-Bass Management Series and the Jossey-Bass Social and Behavioral Science Series. Training and Development in Organizations. San Francisco: Jossey-Bass; 1989. p. 121–82.

    Google Scholar 

  23. Militello LG, Hutton RJ. Applied cognitive task analysis (ACTA): a practitioner’s toolkit for understanding cognitive task demands. Ergonomics. 1998;41:1618–41.

    CAS  PubMed  Google Scholar 

  24. Carayon P. Human factors of complex sociotechnical systems. Appl Ergon. 2006;37:525–35.

    PubMed  Google Scholar 

  25. Chinburapa V, Larson LN, Brucks M, Draugalis J, Bootman JL, Puto CP. Physician prescribing decisions: the effects of situational involvement and task complexity on information acquisition and decision making. Soc Sci Med. 1993;36:1473–82.

    CAS  PubMed  Google Scholar 

  26. Islam R, Weir C, Del Fiol G. Clinical complexity in medicine: a measurement model of task and patient complexity. Methods Inf Med. 2016;55:14–22.

    CAS  PubMed  Google Scholar 

  27. Szulewski A, Howes D, van Merriënboer JJG, Sweller J. From theory to practice: the application of cognitive load theory to the practice of medicine. Acad Med. 2020. https://doi.org/10.1097/ACM.0000000000003524.

  28. Gamborg ML, Mehlsen M, Paltved C, Tramm G, Musaeus P. Conceptualizations of clinical decision-making: a scoping review in geriatric emergency medicine. BMC Emerg Med. 2020;20:73.

    PubMed  PubMed Central  Google Scholar 

  29. Anderson EC, Kesten JM, Lane I, Hay AD, Moss T, Cabral C. Primary care clinicians’ views of paediatric respiratory infection surveillance information to inform clinical decision-making: a qualitative study. BMJ Paediatr Open. 2019;3:e000418.

    PubMed  PubMed Central  Google Scholar 

  30. Chorpita BF, Bernstein A, Daleiden EL. Driving with roadmaps and dashboards: using information resources to structure the decision models in service organizations. Adm. Policy Ment. Heal Ment Heal Serv Res. 2008;35:114–23.

    Google Scholar 

  31. Barber T, Toon L, Tandon P, Green LA. Eliciting and understanding primary care and specialist mental models of cirrhosis care: a cognitive task analysis study. Can J Gastroenterol Hepatol. 2021;2021:1–9. https://doi.org/10.1155/2021/5582297.

    Article  Google Scholar 

  32. Cary C, Militello L, DeChant P, Frankel R, Koch MO, Weiner M. Barriers to single-dose intravesical chemotherapy in nonmuscle invasive bladder cancer—what’s the problem? Urol Pract. 2021;8:291–7. https://doi.org/10.1097/UPJ.0000000000000174.

    Article  PubMed  PubMed Central  Google Scholar 

  33. Salehi V, Hanson N, Smith D, McCloskey R, Jarrett P, Veitch B. Modeling and analyzing hospital to home transition processes of frail older adults using the functional resonance analysis method (FRAM). Appl Ergon. 2021;93:103392. https://doi.org/10.1016/j.apergo.2021.103392.

    Article  PubMed  Google Scholar 

  34. Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;2021(374):n2061.

    Google Scholar 

  35. Bleijenberg N, de Man-van Ginkel JM, Trappenburg JCA, Ettema RGA, Sino CG, Heim N, et al. Increasing value and reducing waste by optimizing the development of complex interventions: enriching the development phase of the Medical Research Council (MRC) framework. Int J Nurs Stud. 2018;79:86–93. https://doi.org/10.1016/j.ijnurstu.2017.12.001.

    Article  PubMed  Google Scholar 

  36. Kahneman D, Klein G. Conditions for intuitive expertise: a failure to disagree. Am Psychol. 2009;64:515–26.

    PubMed  Google Scholar 

  37. Freudenthal A, van Stuijvenberg M, van Goudoever JB. A quiet NICU for improved infants’ health, development and well-being: a systems approach to reducing noise and auditory alarms. Cogn Technol Work. 2013;15:329–45. https://doi.org/10.1007/s10111-012-0235-6.

    Article  Google Scholar 

  38. Nakamura K, Naya Y, Zenbutsu S, Araki K, Cho S, Ohta S, et al. Surgical navigation using three-dimensional computed tomography images fused intraoperatively with live video. J Endourol. 2010;24:521–4. https://doi.org/10.1089/end.2009.0365.

    Article  PubMed  Google Scholar 

  39. Schnittker R, Marshall SD, Horberry T, Young K. Decision-centred design in healthcare: the process of identifying a decision support tool for airway management. Appl Ergon. 2019;77:70–82. https://doi.org/10.1016/j.apergo.2019.01.005.

    CAS  Article  PubMed  Google Scholar 

  40. Yu CH, Stacey D, Sale J, Hall S, Kaplan DM, Ivers N, et al. Designing and evaluating an interprofessional shared decision-making and goal-setting decision aid for patients with diabetes in clinical care - systematic decision aid development and study protocol. Implement Sci. 2014;9:16. https://doi.org/10.1186/1748-5908-9-16.

  41. Yu CH, Parsons JA, Hall S, Newton D, Jovicic A, Lottridge D, et al. User-centered design of a web-based self-management site for individuals with type 2 diabetes - providing a sense of control and community. BMC Med Inform Decis Mak. 2014;14:60. https://doi.org/10.1186/1472-6947-14-60.

  42. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

  43. O’Cathain A, Croot L, Duncan E, Rousseau N, Sworn K, Turner KM, et al. Guidance on how to develop complex interventions to improve health and healthcare. BMJ Open. 2019;9:e029954.

  44. Dowding D, Lichtner V, Closs SJ. Using the MRC framework for complex interventions to develop clinical decision support: a case study. Stud Health Technol Inform. 2017;235:544–8.

  45. Hoddinott P. A new era for intervention development studies. Pilot Feasibility Stud. 2015;1:36.

  46. Kastner M, Straus SE. Application of the knowledge-to-action and Medical Research Council frameworks in the development of an osteoporosis clinical decision support tool. J Clin Epidemiol. 2012;65:1163–70.

  47. Wulff CN, Thygesen M, Søndergaard J, Vedsted P. Case management used to optimize cancer care pathways: a systematic review. BMC Health Serv Res. 2008;8:227.

  48. Gentles SJ, Charles C, Nicholas DB, Ploeg J, McKibbon KA. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev. 2016;5:172.

  49. Swaby L, Hind D, Shu P. The use of cognitive task analysis in clinical and health services research - a systematic review; 2019.

  50. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700. https://doi.org/10.1136/bmj.b2700.

  51. Hignett S. Environmental audit of UK hospitals to design safer facilities for frail and/or confused older people. Adv Hum Aspects Healthc. 2012. https://doi.org/10.1201/b12318-29.

  52. Kaufman DR, Starren JB. A methodological framework for evaluating mobile health devices. AMIA Annu Symp Proc. 2006:978.

  53. Jatobá A, de Carvalho PVR, da Cunha AM. A method for work modeling at complex systems: towards applying information systems in family health care units. Work. 2012. https://doi.org/10.3233/wor-2012-0626-3468.

  54. Crandall B, Klein GA, Hoffman RR. Working minds: a practitioner’s guide to cognitive task analysis. Cambridge: MIT Press; 2006. https://doi.org/10.7551/mitpress/7304.001.0001.

  55. Mbuagbaw L, Lawson DO, Puljak L, Allison DB, Thabane L. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol. 2020;20:226.

  56. Johnson P, Johnson H, Waddington R, Shouls A. Task-related knowledge structures: analysis, modelling and application. In: Proceedings of BCS HCI; 1988. p. 35–62.

  57. Hylton C. Patient engagement – the PaCER model. Trials. 2015;16:1–1.

  58. Demirel D. Dissertation Abstracts International: Section B: The Sciences and Engineering. 2019;80(3-B(E)).

  59. Salmon PM, Stanton NA, Walker GH, Jenkins DP. Distributed situation awareness: theory, measurement and application to teamwork. Ashgate; 2009.

  60. Crandall B, Getchell-Reiter K. Critical decision method: a technique for eliciting concrete assessment indicators from the intuition of NICU nurses. ANS Adv Nurs Sci. 1993;16:42–51.

  61. Hou JK, Gasche C, Drazin NZ, Weaver SA, Ehrlich OG, Oberai R, et al. Assessment of gaps in care and the development of a care pathway for anemia in patients with inflammatory bowel diseases. Inflamm Bowel Dis. 2017;23:35–43.

  62. Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: characteristics and guidelines for use. Am J Public Health. 1984;74:979–83.

  63. Cioffi JM, Swain J, Arundell F. The decision to suture after childbirth: cues, related factors, knowledge and experience used by midwives. Midwifery. 2010;26:246–55.

  64. Sitterding MC, Ebright P, Broome M, Patterson ES, Wuchner S. Situation awareness and interruption handling during medication administration. West J Nurs Res. 2014;36:891–916.

  65. Cannon-Bowers J, Bowers C, Stout R, Ricci K, Hildabrand A. Using cognitive task analysis to develop simulation-based training for medical tasks. Mil Med. 2013;178:15–21.

  66. Madani A, Vassiliou MC, Watanabe Y, Al-Halabi B, Al-Rowais MS, Deckelbaum DL, et al. What are the principles that guide behaviors in the operating room? Creating a framework to define and measure performance. Ann Surg. 2017;265:255–67.

  67. Jacklin R, Sevdalis N, Darzi A, Vincent C. Mapping surgical practice decision making: an interview study to evaluate decisions in surgical care. Am J Surg. 2008;195:689–96.

  68. Sullivan ME, Yates KA, Inaba K, Lam L, Clark RE. The use of cognitive task analysis to reveal the instructional limitations of experts in the teaching of procedural skills. Acad Med. 2014;89:811–6.

  69. Gazarian PK, Henneman EA, Chandler GE. Nurse decision making in the prearrest period. Clin Nurs Res. 2010;19:21–37.

  70. Craig C, Klein MI, Griswold J, Gaitonde K, McGill T, Halldorsson A. Using cognitive task analysis to identify critical decisions in the laparoscopic environment. Hum Factors. 2012;54:1025–39.

  71. Chan TM, Van Dewark K, Sherbino J, Schwartz A, Norman G, Lineberry M. Failure to flow: an exploration of learning and teaching in busy, multi-patient environments using an interpretive description method. Perspect Med Educ. 2017;6:380–7.

  72. Varga E, Pattynama PMT, Freudenthal A. Manipulation of mental models of anatomy in interventional radiology and its consequences for design of human–computer interaction. Cogn Technol Work. 2013. https://doi.org/10.1007/s10111-012-0227-6.

  73. Chellali A, Schwaitzberg SD, Jones DB, Romanelli J, Miller A, Rattner D, et al. Toward scar-free surgery: an analysis of the increasing complexity from laparoscopic surgery to NOTES. Surg Endosc. 2014;28:3119–33.

  74. Grunwald T, Clark D, Fisher SS, McLaughlin M, Narayanan S, Piepol D. Using cognitive task analysis to facilitate collaboration in development of simulator to accelerate surgical training. Stud Health Technol Inform. 2004;98:114–20.

  75. Harenčárová H. Managing uncertainty in paramedics’ decision making. J Cogn Eng Decis Mak. 2017. https://doi.org/10.1177/1555343416674814.

  76. Hashimoto DA, Gustaf Axelsson C, Jones CB, Phitayakorn R, Petrusa E, McKinley SK, et al. Surgical procedural map scoring for decision-making in laparoscopic cholecystectomy. Am J Surg. 2019. https://doi.org/10.1016/j.amjsurg.2018.11.011.

  77. Arora S, Sevdalis N, Nestel D, Woloshynowych M, Darzi A, Kneebone R. The impact of stress on surgical performance: a systematic review of the literature. Surgery. 2010;147(3):318–30, 330.e1–6.

  78. Frost DW, Cook DJ, Heyland DK, Fowler RA. Patient and healthcare professional factors influencing end-of-life decision-making during critical illness: a systematic review. Crit Care Med. 2011;39:1174–89.

  79. James FR, Power N, Laha S. Decision-making in intensive care medicine - a review. Pediatr Crit Care Med. 2018;19:247–58.

  80. Kerckhoffs MC, Kant M, van Delden JJM, Hooft L, Kesecioglu J, van Dijk D. Selecting and evaluating decision-making strategies in the intensive care unit: a systematic review. J Crit Care. 2019;51:39–45.

  81. Lamb BW, Brown KF, Nagpal K, Vincent C, Green JSA, Sevdalis N. Quality of care management decisions by multidisciplinary cancer teams: a systematic review. Ann Surg Oncol. 2011;18:2116–25.

  82. Wilson A, Ronnekleiv-Kelly SM, Pawlik TM. Regret in surgical decision making: a systematic review of patient and physician perspectives. World J Surg. 2017. https://doi.org/10.1007/s00268-017-3895-9.

  83. Gazarian PK. Use of the critical decision method in nursing research: an integrative review. ANS Adv Nurs Sci. 2013;36:106–17.

  84. Beckles Z, Glover S, Ashe J, Stockton S, Boynton J, Lai R, et al. Searching CINAHL did not add value to clinical questions posed in NICE guidelines. J Clin Epidemiol. 2013;66:1051–7.

  85. Dalgetty R, Miller CB, Dombrowski SU. Examining the theory-effectiveness hypothesis: a systematic review of systematic reviews. Br J Health Psychol. 2019. https://doi.org/10.1111/bjhp.12356.

  86. Truijens F, Zühlke-van Hulzen L, Vanheule S. To manualize, or not to manualize: is that still the question? A systematic review of empirical evidence for manual superiority in psychological treatment. J Clin Psychol. 2018. https://doi.org/10.1002/jclp.22712.

  87. de Barra M, Scott C, Johnston M, De Bruin M, Scott N, Matheson C, et al. Do pharmacy intervention reports adequately describe their interventions? A template for intervention description and replication analysis of reports included in a systematic review. BMJ Open. 2019;9:e025511. https://doi.org/10.1136/bmjopen-2018-025511.

  88. Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687.

  89. Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. Implement Sci. 2009;4:40.

  90. Wears RL. Standardisation and its discontents. Cogn Technol Work. 2015;17:89–94.

  91. Kendall PC, Frank HE. Implementing evidence-based treatment protocols: flexibility within fidelity. Clin Psychol Sci Pract. 2018;25:e12271.

  92. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ. 2004;328:1561–3.

  93. May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalization process theory. Sociology. 2009;43:535–54.

  94. Rogers WA, Lotz M, Hutchison K, Pourmoslemi A, Eyers A. Identifying surgical innovation: a qualitative study of surgeons’ views. Ann Surg. 2014;259:273–8.

  95. Blencowe NS, Mills N, Cook JA, Donovan JL, Rogers CA, Whiting P, et al. Standardizing and monitoring the delivery of surgical interventions in randomized clinical trials. Br J Surg. 2016;103:1377–84.

  96. Fairhurst K, Blazeby JM, Potter S, Gamble C, Rowlands C, Avery KNL. Value of surgical pilot and feasibility study protocols. Br J Surg. 2019;106:968–78.

  97. Miller CJ, Wiltsey-Stirman S, Baumann AA. Iterative Decision-making for Evaluation of Adaptations (IDEA): a decision tree for balancing adaptation, fidelity, and intervention impact. J Community Psychol. 2020;48:1163–77.

  98. Rousseau N, Turner KM, Duncan E, O’Cathain A, Croot L, Yardley L, et al. Attending to design when developing complex health interventions: a qualitative interview study with intervention developers and associated stakeholders. PLoS One. 2019;14:e0223615.

  99. McLachlan S, Kyrimi E, Dube K, Hitman G, Simmonds J, Fenton N. Towards standardisation of evidence-based clinical care process specifications. Health Informatics J. 2020:2512–37. https://doi.org/10.1177/1460458220906069.

  100. Boehler ML, Roberts N, Sanfey H, Mellinger J. Do surgeons and gastroenterologists describe endoscopic retrograde cholangiopancreatography differently? A qualitative study. J Surg Educ. 2016;73:66–72.

  101. McCulloch P, Altman DG, Campbell WB, Flum DR, Glasziou P, Marshall JC, et al. No surgical innovation without evaluation: the IDEAL recommendations. Lancet. 2009;374:1105–12.

  102. Edwards TC, Coombs AW, Szyszka B, Logishetty K, Cobb JP. Cognitive task analysis-based training in surgery: a meta-analysis. BJS Open. 2021;6. https://doi.org/10.1093/bjsopen/zrab122.

  103. Tofel-Grehl C, Feldon DF. Cognitive task analysis-based training: a meta-analysis of studies. J Cogn Eng Decis Mak. 2013;7:293–304.

  104. Ahmad K, Bhattacharyya R, Gupte C. Using cognitive task analysis to train orthopaedic surgeons - is it time to think differently? A systematic review. Ann Med Surg (Lond). 2020;59:131–7.

  105. Wingfield LR, Kulendran M, Chow A, Nehme J, Purkayastha S. Cognitive task analysis: bringing Olympic athlete style training to surgical education. Surg Innov. 2015;22:406–17.

  106. Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Stud. 2020;6:13. https://doi.org/10.1186/s40814-019-0544-0.

  107. Qian Y, Walters SJ, Jacques R, Flight L. Comprehensive review of statistical methods for analysing patient-reported outcomes (PROs) used as primary outcomes in randomised controlled trials (RCTs) published by the UK’s Health Technology Assessment (HTA) journal (1997–2020). BMJ Open. 2021;11:e051673. https://doi.org/10.1136/bmjopen-2021-051673.

  108. Weise A, Büchter R, Pieper D, Mathes T. Assessing context suitability (generalizability, external validity, applicability or transferability) of findings in evidence syntheses in healthcare—an integrative review of methodological guidance. Res Synth Methods. 2020;11:760–79. https://doi.org/10.1002/jrsm.1453.

  109. Dunning D. The Dunning-Kruger effect: on being ignorant of one’s own ignorance. Adv Exp Soc Psychol. 2011;44:247–96. https://doi.org/10.1016/B978-0-12-385522-0.00005-6.

  110. Ottati V, Price ED, Wilson C, Sumaktoyo N. When self-perceptions of expertise increase closed-minded cognition: the earned dogmatism effect. J Exp Soc Psychol. 2015;61:131–8. https://doi.org/10.1016/j.jesp.2015.08.003.

  111. Burgman M, Carr A, Godden L, Gregory R, McBride M, Flander L, et al. Redefining expertise and improving ecological judgement. Conserv Lett. 2011;4:81–7.

  112. Martini C. The epistemology of expertise. In: The Routledge handbook of social epistemology. 1st ed; 2019. p. 8. ISBN 9781315717937.

  113. Candia C, Uzzi B. Quantifying the selective forgetting and integration of ideas in science and technology. Am Psychol. 2021;76(6):1067–87. https://doi.org/10.1037/amp0000863.

  114. Cole S. Why sociology doesn’t make progress like the natural sciences. Sociol Forum. 1994;9(2):133–54.

  115. Li VF. The failed institutionalization of “complexity science”: a focus on the Santa Fe Institute’s legitimization strategy. Hist Sci. 2021;59(3):344–69. https://doi.org/10.1177/0073275320938295.

  116. Zagaria A, Ando A, Zennaro A. Psychology: a giant with feet of clay. Integr Psychol Behav Sci. 2020;54:521–62. https://doi.org/10.1007/s12124-020-09524-5.

  117. Cameron CA, Beemsterboer PL, Johnson LA, Mislevy RJ, Steinberg LS, Breyer FJ. A cognitive task analysis for dental hygiene. J Dent Educ. 2000;64:333–51.

  118. Dominguez CO, Hutton RJB, Flach JM, McKellar DP. Perception-action coupling in endoscopic surgery: a cognitive-task analysis approach. In: Studies in perception and action III; 2019. https://doi.org/10.4324/9781315789361-74.

  119. Funk KH 2nd, Bauer JD, Doolen TL, Telasha D, Nicolalde RJ, Reeber M, et al. Use of modeling to identify vulnerabilities to human error in laparoscopy. J Minim Invasive Gynecol. 2010;17:311–20.

  120. Yagahara A, Sato H, Yokooka Y, Tsuji S, Kurowarabi K, Ogasawara K. Proposal for bottom-up hierarchical task analysis: application to the mammography examination process. J Med Imaging Health Inform. 2015. https://doi.org/10.1166/jmihi.2015.1548.

  121. Mauksch S, von der Gracht HA, Gordon TJ. Who is an expert for foresight? A review of identification methods. Technol Forecast Soc Change. 2020;154:119982.


Acknowledgements

Not applicable.

Funding

No specific funding supported the conduct of this work.

Author information


Contributions

LS and DH created the search strategy and carried out preliminary searches; PS carried out the review searches. LS and PS assessed the eligibility of results and completed data extraction; DH and KS resolved any disputes. LS and KS synthesised results, and DH resolved any disputes. LS, DH and KS contributed to writing the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Lizzie Swaby.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

All authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Table 1.

Reasons for exclusion of articles at full text.

Additional file 2: Supplementary Table 2.

Data extraction for eligible studies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Swaby, L., Shu, P., Hind, D. et al. The use of cognitive task analysis in clinical and health services research — a systematic review. Pilot Feasibility Stud 8, 57 (2022). https://doi.org/10.1186/s40814-022-01002-6


Keywords

  • Cognitive task analysis
  • Medical decision-making
  • Clinical decision-making