Introduction: This study aimed to conduct a targeted needs assessment to identify and prioritise technical skills and procedures suited for simulation-based training (SBT) in private otorhinolaryngology (ORL) practice in Denmark, including mapping the learning environment related to implementation of SBT.
Methods: A panel of trainers and trainees in private ORL practice was recruited. Using the Delphi method, three rounds of surveys were conducted. Round one consisted of a survey of the learning environment and a brainstorming phase. Round two quantified the frequency of procedures and rated the importance of procedural competency, impact on patient safety, and feasibility for SBT. In round three, panelists eliminated and ranked procedures for final prioritisation.
Results: A total of 26 of 57 invited trainers and trainees accepted participation. The educational environment was described and 136 skills were suggested in the brainstorming phase. “Non-technical” skills were removed, and the remaining 46 technical skills were grouped for appraisal in round two. In round three, panelists reduced these to eight technical skills and procedures, which were retained for final prioritisation for SBT; myringotomy with ventilation tube insertion ranked highest. Trainees and trainers indicated that close supervision and dedicated time for training were major strengths of the learning environment.
Conclusions: Our findings extend the results obtained in a previous general needs assessment and may inform curricular implementation of SBT in private ORL practice. A structured “package” with SBT and assessment for the identified procedures is desired by trainers. This work is already in progress, and implementation is facilitated by a positive attitude towards SBT among trainers and trainees alike.
Objective: Otoscopy is a frequently performed procedure and competency in this skill is important across many specialties. We aim to systematically review current medical educational evidence for training of handheld otoscopy skills.
Methods: Following the PRISMA guideline, studies reporting on training and/or assessment of handheld otoscopy were identified by searching the following databases: PubMed, Embase, OVID, the Cochrane Library, PLoS Medicine, the Directory of Open Access Journals (DOAJ), and Web of Science. Two reviewers extracted data on study design, training intervention, educational outcomes, and results. The quality of the educational evidence was assessed, and outcomes were classified according to Kirkpatrick’s model of educational outcomes.
Results: The searches yielded a total of 6064 studies, with a final inclusion of 33 studies in the qualitative synthesis. Handheld otoscopy training could be divided into workshops, physical simulators, web-based training/e-learning, and smartphone-enabled otoscopy. Workshops were the most commonly described educational intervention and typically consisted of lectures, hands-on demonstrations, and training on peers. Almost all studies reported a favorable effect on learner attitude, knowledge, or skills. The educational quality of the studies was reasonable, but the educational outcomes were mostly evaluated at the lower Kirkpatrick levels, with only a single study determining the effects of training on actual change in learner behavior.
Conclusion: Overall, any systematic approach to training handheld otoscopy appears beneficial regardless of learner level, but the heterogeneity of the studies makes comparisons between them difficult, and the relative effect sizes of the interventions could not be determined.
Objective: Myringotomy and ventilation tube insertion (MT) is a key procedure in otorhinolaryngology and can be trained using simulation models. We aimed to systematically review the literature on models for simulation-based training and assessment of MT and supporting educational evidence.
Databases reviewed: PubMed, Embase, Cochrane Library, Web of Science, Directory of Open Access Journals.
Methods: Inclusion criteria were MT training and/or skills assessment using all types of training modalities and learners. Studies were divided into 1) descriptive and 2) educational interventional/observational in the analysis. For descriptive studies, we provide an overview of available models including materials and cost. Educational studies were appraised using Kirkpatrick’s level of educational outcomes, Messick’s framework of validity, and a structured quality assessment tool.
Results: Forty-six studies were included, consisting of 21 descriptive studies and 25 educational studies. Thirty-one unique physical and three virtual reality simulation models were identified. The studies report moderate to high realism of the different simulators, and trainees and educators perceive them as beneficial in training MT skills. Overall, simulation-based training is found to reduce procedure time and errors and to increase performance as measured using different assessment tools. None of the studies used a contemporary validity framework, and the current educational evidence is limited.
Conclusion: Numerous simulation models and assessment tools have been described in the literature, but educational evidence and systematic implementation into training curricula are scarce. There is especially a need to establish the effect of simulation-based training of MT on transfer to the operating room and on patient outcomes.
BACKGROUND: Cochlear implantation requires excellent surgical skills, and virtual reality simulation training is an effective method for acquiring basic competency in temporal bone surgery before progression to cadaver dissection. However, cochlear implantation virtual reality simulation training remains largely unexplored, and only one simulator currently supports training of cochlear implantation electrode insertion. Here, we aim to evaluate the effect of cochlear implantation virtual reality simulation training on subsequent cadaver dissection performance and self-directedness.
METHODS: This was a randomized, controlled trial. Eighteen otolaryngology residents were randomized to either mastoidectomy plus cochlear implantation virtual reality simulation training (intervention) or mastoidectomy virtual reality simulation training alone (controls) before cadaver cochlear implantation surgery. Surgical performance was evaluated by two blinded expert raters using a validated, structured assessment tool. The need for supervision (reflecting self-directedness) was assessed via post-dissection questionnaires.
RESULTS: The intervention group achieved a mean score of 22.9 of a maximum of 44 points, 5.4% higher than the control group’s 21.8 points (P = .51). On average, the intervention group required assistance 1.3 times during cadaver drilling; the control group required assistance 41% more frequently, at 1.9 times (P = .21).
CONCLUSION: Cochlear implantation virtual reality simulation training is feasible in the context of a cadaver dissection course. Adding cochlear implantation virtual reality training to basic mastoidectomy virtual reality simulation training did not lead to a significant improvement in performance or self-directedness in this study. Our findings suggest that learning an advanced temporal bone procedure such as cochlear implantation surgery requires considerably more training than learning mastoidectomy.
Objective: Mastering cochlear implant (CI) surgery requires repeated practice, preferably initiated in a safe (i.e., simulated) environment. Mastoidectomy virtual reality (VR) simulation-based training (SBT) is effective, but SBT of CI surgery remains largely uninvestigated. The learning curve is imperative for understanding surgical skills acquisition and for developing competency-based training. Here, we explore learning curves in VR SBT of CI surgery and the transfer of skills to a 3D-printed model.
Methods: Prospective, single-arm trial. Twenty-four novice medical students completed a pre-training CI insertion test on a commercially available, pre-drilled 3D-printed temporal bone. A training program of 18 VR-simulated CI procedures was completed in the Visible Ear Simulator over four sessions. Finally, a post-training test similar to the pre-training test was completed. Two blinded experts rated performances using the validated Cochlear Implant Surgery Assessment Tool (CISAT). Performance scores were analyzed using linear mixed models.
Results: Learning curves were highly individual, with most of the improvement occurring initially, followed by small but steady gains throughout the 18 procedures. CI VR simulation performance improved 33% (p < 0.001). Insertion performance on a 3D-printed temporal bone improved 21% (p < 0.001), demonstrating transfer of skills.
Discussion: VR SBT of CI surgery improves novices’ performance and is useful for introducing the procedure and acquiring basic skills. CI surgery training should pivot on objective performance assessment, ensuring that pre-defined competency is reached before cadaver or real-life surgery. Simulation-based training provides a structured and safe learning environment for initial training.
Conclusion: CI surgery skills improve from VR SBT, which can be used to learn the fundamentals of CI surgery.
Objective: 3D-printed models hold great potential for temporal bone surgical training as a supplement to cadaveric dissection. Nevertheless, critical knowledge on manufacturing remains scattered, and little is known about whether use of these models improves surgical performance. This systematic review aims to explore (1) methods used for manufacturing and (2) how educational evidence supports using 3D-printed temporal bone models.
Data sources: PubMed, Embase, the Cochrane Library, and Web of Science.
Review methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, relevant studies were identified and data on manufacturing and validation and/or training extracted by 2 reviewers. Quality assessment was performed using the Medical Education Research Study Quality Instrument tool; educational outcomes were determined according to Kirkpatrick’s model.
Results: The search yielded 595 studies; 36 studies were found eligible and included for analysis. The described 3D-printed models were based on computed tomography scans from patients or cadavers. Processing included manual segmentation of key structures such as the facial nerve; postprocessing, for example, consisted of removal of print material inside the model. Overall, educational quality was low, and most studies evaluated their models using only expert and/or trainee opinion (ie, Kirkpatrick level 1). Most studies reported positive attitudes toward the models and their potential for training.
Conclusion: Manufacturing and use of 3D-printed temporal bones for surgical training are widely reported in the literature. However, evidence to support their use and knowledge about both manufacturing and the effects on subsequent surgical performance are currently lacking. Therefore, stronger educational evidence and manufacturing knowhow are needed for widespread implementation of 3D-printed temporal bones in surgical curricula.
Purpose: Competency-based education relies on the validity and reliability of assessment scores. Generalizability (G) theory is well suited to explore the reliability of assessment tools in medical education but has only been applied to a limited extent. This study aimed to systematically review the literature using G-theory to explore the reliability of structured assessment of medical and surgical technical skills and to assess the relative contributions of different factors to variance.
Method: In June 2020, 11 databases, including PubMed, were searched from inception through May 31, 2020. Eligible studies included the use of G-theory to explore reliability in the context of assessment of medical and surgical technical skills. Descriptive information on the study, assessment context, assessment protocol, participants being assessed, and G-analyses was extracted. Data were used to map the use of G-theory and to explore variance components analyses. A meta-analysis was conducted to synthesize the extracted data on the sources of variance and reliability.
Results: Forty-four studies were included; of these, 39 had sufficient data for meta-analysis. The total pool included 35,284 unique assessments of 31,496 unique performances of 4,154 participants. Person variance had a pooled effect of 44.2% (95% confidence interval [CI] [36.8%-51.5%]). Only assessment tool type (Objective Structured Assessment of Technical Skills-type vs task-based checklist-type) had a significant effect on person variance. The pooled reliability (G-coefficient) was .65 (95% CI [.59-.70]). Most studies included D-studies (39; 89%), which generally indicated that higher ratios of performances to assessors were needed to achieve a sufficiently reliable assessment.
Conclusions: G-theory is increasingly being used to examine reliability of technical skills assessment in medical education but more rigor in reporting is warranted. Contextual factors can potentially affect variance components and thereby reliability estimates and should be considered, especially in high-stakes assessment. Reliability analysis should be a best practice when developing assessment of technical skills.
OBJECTIVE: Handheld otoscopy requires both technical and diagnostic skills, which are often reported to be insufficient after medical training. We aimed to develop and gather validity evidence for an assessment tool for handheld otoscopy using contemporary medical educational standards.
STUDY DESIGN: Educational study.
SETTING: University/teaching hospital.
SUBJECTS AND METHODS: A structured Delphi methodology was used to develop the assessment tool: nine key opinion leaders (otologists) in undergraduate training of otoscopy iteratively achieved consensus on the content. Next, validity evidence was gathered by the video-taped assessment of two handheld otoscopy performances of 15 medical students (novices) and 11 specialists in otorhinolaryngology using two raters. Standard setting (pass/fail criteria) was explored using the contrasting groups and Angoff methods.
RESULTS: The developed Copenhagen Assessment Tool of Handheld Otoscopy Skills (CATHOS) consists of 10 items rated using a 5-point Likert scale with descriptive anchors. Validity evidence was collected and structured according to Messick’s framework: for example, the CATHOS had excellent discriminative validity (mean difference in performance between novices and experts of 20.4 out of 50 points, p < 0.001) and high internal consistency (Cronbach’s alpha = 0.94). Finally, a pass/fail score was established at 30 points for medical students and 42 points for specialists in ORL.
CONCLUSION: We have developed and gathered validity evidence for an assessment tool for the technical skills of handheld otoscopy and have set standards of performance. Standardized assessment allows learning to be individualized to the level of proficiency, can be implemented in under- and postgraduate handheld otoscopy training curricula, and is also useful for evaluating training interventions.