Purpose: 3D-printing (three-dimensional printing) is an emerging technology with promising applications for patient-specific interventions. Nonetheless, knowledge on the clinical applicability of 3D-printing in otology and research on its use remains scattered. Understanding these new treatment options is a prerequisite for clinical implementation, which could improve patient outcomes. This review aims to explore current applications of 3D-printed patient-specific otologic interventions, including state of the evidence, strengths, limitations, and future possibilities.
Methods: Following the PRISMA statement, relevant studies were identified through PubMed, EMBASE, the Cochrane Library, and Web of Science. Data on the manufacturing process and interventions were extracted by two reviewers. Study quality was assessed using the Joanna Briggs Institute’s critical appraisal tools.
Results: Screening yielded 590 studies; 63 were found eligible and included for analysis. 3D-printed models were used as guides, templates, implants, and devices. Outer ear interventions comprised 73% of the studies. Overall, optimistic sentiments on 3D-printed models were reported, including increased surgical precision/confidence, faster manufacturing/operation time, and reduced costs/complications. Nevertheless, study quality was low as most studies failed to use relevant objective outcomes, compare new interventions with conventional treatment, and sufficiently describe manufacturing.
Conclusion: Several clinical interventions using patient-specific 3D-printing in otology are considered promising. However, it remains unclear whether these interventions actually improve patient outcomes due to lack of comparison with conventional methods and low levels of evidence. Further, the reproducibility of the 3D-printed interventions is compromised by insufficient reporting. Future efforts should focus on objective, comparative outcomes evaluated in large-scale studies.
Keywords: 3D-printing; Additive manufacturing; Ear surgery; Otology; Patient-specific.
Background: It is necessary to train a large number of healthcare workers (HCW) within a limited time to ensure adequate human resources during an epidemic. There remains an urgent need for best practices on development and implementation of training programmes.
Objective: To explore published literature in relation to training and education for viral epidemics as well as the effect of these interventions to inform training of HCW.
Data sources: Systematic searches in five databases were performed between 1 January 2000 and 24 April 2020 for studies reporting on educational interventions in response to major viral epidemics.
Study eligibility criteria: All studies on educational interventions developed, implemented and evaluated in response to major global viral outbreaks from 2000 to 2020.
Participants: Healthcare workers.
Interventions: Educational or training interventions.
Study appraisal and synthesis methods: Descriptive information was extracted and synthesised according to content, competency category, educational methodology, educational effects and level of educational outcome. Quality appraisal was performed using a criterion-based checklist.
Results: A total of 15 676 records were identified and 46 studies were included. Most studies were motivated by the Ebola virus outbreak with doctors and nurses as primary learners. Traditional didactic methods were commonly used to teach theoretical knowledge. Simulation-based training was used mainly for training of technical skills, such as donning and doffing of personal protective equipment. Evaluation of the interventions consisted mostly of surveys on learner satisfaction and confidence or tests of knowledge and skills. Only three studies investigated transfer to the clinical setting or effect on patient outcomes.
Conclusions and implications of findings: The included studies describe important educational experiences from past epidemics with a variety of educational content, design and modes of delivery. High-level educational evidence is limited. Evidence-based and standardised training programmes that are easily adapted locally are recommended in preparation for future outbreaks.
Objective: This systematic review aims to examine the use of standard-setting methods in the context of simulation-based training of surgical procedures.
Summary of background: Simulation-based training is increasingly used in surgical education. However, it is important to determine which level of competency trainees must reach during simulation-based training before operating on patients. Therefore, pass/fail standards must be established using systematic, transparent, and valid methods.
Methods: Systematic literature search was done in four databases (Ovid MEDLINE, Embase, Web of Science, and Cochrane Library). Original studies investigating simulation-based assessment of surgical procedures with application of a standard setting were included. Quality of evidence was appraised using GRADE.
Results: Of 24,299 studies identified by searches, 232 studies met the inclusion criteria. Publications using previously established standard settings were excluded (n = 70), resulting in 162 original studies included in the final analyses. Most studies described how the standard setting was determined (n = 147, 91%), and most used the mean or median performance score of experienced surgeons (n = 65, 40%) for standard setting. We found considerable differences across most of the studies regarding study design, set-up, and expert level classification. The studies were appraised as having low to moderate quality of evidence.
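As an illustration of the most commonly reported approach (using the mean performance score of experienced surgeons as the pass/fail standard), the sketch below shows how such a cut score could be derived. The scores are hypothetical and not taken from any of the included studies.

```python
# Illustrative sketch: mean-of-experts standard setting for a
# simulation-based assessment. All scores are hypothetical.
from statistics import mean

expert_scores = [82, 88, 79, 91, 85]  # hypothetical expert simulator scores
trainee_score = 80                    # hypothetical trainee score

pass_mark = mean(expert_scores)       # pass/fail standard = mean expert score
passed = trainee_score >= pass_mark

print(round(pass_mark, 1))            # 85.0
print(passed)                         # False
```

Variants reported in the literature adjust this cut score (e.g., subtracting one standard deviation from the expert mean) or use contrasting-groups methods; the choice materially affects pass/fail decisions.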
Conclusion: Surgical education is shifting towards competency-based education, and simulation-based training is increasingly used for skills acquisition and assessment. Most studies consider and describe how standard settings are established, using more or less structured methods. For current and future educational programs, however, a critical approach is needed so that learners receive a fair, valid, and reliable assessment.
Objective: Otoscopy is a frequently performed procedure and competency in this skill is important across many specialties. We aim to systematically review current medical educational evidence for training of handheld otoscopy skills.
Methods: Following the PRISMA guideline, studies reporting on training and/or assessment of handheld otoscopy were identified by searching the following databases: PubMed, Embase, OVID, the Cochrane Library, PLoS Medicine, Directory of Open Access Journals (DOAJ), and Web of Science. Two reviewers extracted data on study design, training intervention, educational outcomes, and results. Quality of educational evidence was assessed along with classification according to Kirkpatrick’s model of educational outcomes.
Results: The searches yielded a total of 6064 studies, with a final inclusion of 33 studies in the qualitative synthesis. Handheld otoscopy training could be divided into workshops, physical simulators, web-based training/e-learning, and smartphone-enabled otoscopy. Workshops were the most commonly described educational intervention and typically consisted of lectures, hands-on demonstrations, and training on peers. Almost all studies reported a favorable effect on learner attitude, knowledge, or skills. The educational quality of the studies was reasonable, but the educational outcomes were mostly evaluated at the lower Kirkpatrick levels, with only a single study determining the effect of training on actual change in learner behavior.
Conclusion: Overall, it seems that any systematic approach to training of handheld otoscopy is beneficial regardless of learner level. However, the heterogeneity of the studies makes comparisons between studies difficult, and the relative effect sizes of the interventions could not be determined.
Objective: 3D-printed models hold great potential for temporal bone surgical training as a supplement to cadaveric dissection. Nevertheless, critical knowledge on manufacturing remains scattered, and little is known about whether use of these models improves surgical performance. This systematic review aims to explore (1) methods used for manufacturing and (2) how educational evidence supports using 3D-printed temporal bone models.
Data sources: PubMed, Embase, the Cochrane Library, and Web of Science.
Review methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, relevant studies were identified and data on manufacturing and validation and/or training extracted by 2 reviewers. Quality assessment was performed using the Medical Education Research Study Quality Instrument tool; educational outcomes were determined according to Kirkpatrick’s model.
Results: The search yielded 595 studies; 36 studies were found eligible and included for analysis. The described 3D-printed models were based on computed tomography scans from patients or cadavers. Processing included manual segmentation of key structures such as the facial nerve; postprocessing, for example, consisted of removal of print material inside the model. Overall, educational quality was low, and most studies evaluated their models using only expert and/or trainee opinion (ie, Kirkpatrick level 1). Most studies reported positive attitudes toward the models and their potential for training.
Conclusion: Manufacturing and use of 3D-printed temporal bones for surgical training are widely reported in the literature. However, evidence to support their use and knowledge about both manufacturing and the effects on subsequent surgical performance are currently lacking. Therefore, stronger educational evidence and manufacturing knowhow are needed for widespread implementation of 3D-printed temporal bones in surgical curricula.
Purpose: Competency-based education relies on the validity and reliability of assessment scores. Generalizability (G) theory is well suited to explore the reliability of assessment tools in medical education but has only been applied to a limited extent. This study aimed to systematically review the literature using G-theory to explore the reliability of structured assessment of medical and surgical technical skills and to assess the relative contributions of different factors to variance.
Method: In June 2020, 11 databases, including PubMed, were searched from inception through May 31, 2020. Eligible studies included the use of G-theory to explore reliability in the context of assessment of medical and surgical technical skills. Descriptive information on the study, assessment context, assessment protocol, participants being assessed, and G-analyses was extracted. Data were used to map the use of G-theory and explore variance components analyses. A meta-analysis was conducted to synthesize the extracted data on the sources of variance and reliability.
Results: Forty-four studies were included; of these, 39 had sufficient data for meta-analysis. The total pool included 35,284 unique assessments of 31,496 unique performances by 4,154 participants. Person variance had a pooled effect of 44.2% (95% confidence interval [CI] [36.8%-51.5%]). Only assessment tool type (Objective Structured Assessment of Technical Skills-type vs task-based checklist-type) had a significant effect on person variance. The pooled reliability (G-coefficient) was .65 (95% CI [.59-.70]). Most studies included D-studies (39, 89%), which generally indicated that higher ratios of performances to assessors were needed to achieve a sufficiently reliable assessment.
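For context, the reported G-coefficient relates person variance to total observed variance. In a fully crossed person × assessor (p × a) design — an illustrative assumption, as designs varied across the included studies — the relative G-coefficient takes the form:

\[
G = \frac{\sigma^2_{p}}{\sigma^2_{p} + \dfrac{\sigma^2_{pa,e}}{n_{a}}}
\]

where \(\sigma^2_{p}\) is the true person (learner) variance, \(\sigma^2_{pa,e}\) is the person-by-assessor interaction variance confounded with residual error, and \(n_{a}\) is the number of assessors. D-studies project reliability by varying \(n_{a}\) (or the number of performances), which is how the studies above estimated the conditions needed to reach sufficient reliability.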
Conclusions: G-theory is increasingly being used to examine reliability of technical skills assessment in medical education but more rigor in reporting is warranted. Contextual factors can potentially affect variance components and thereby reliability estimates and should be considered, especially in high-stakes assessment. Reliability analysis should be a best practice when developing assessment of technical skills.