Purpose: Temporal bone surgery requires excellent surgical skills, and simulation-based training can aid novices’ acquisition of these skills. However, simulation-based training is challenged by early stagnation of learning after relatively few procedures. Structured self-assessment during practice might enhance learning by inducing reflection and engagement in the learning task. In this study, structured self-assessment was introduced during virtual reality (VR) simulation of mastoidectomy to investigate its effects on subsequent performance during cadaveric dissection.
Methods: This was a prospective educational study with comparison to historical controls (reference cohort). At a temporal bone dissection course, eighteen participants performed structured self-assessment during three hours of VR simulation training of mastoidectomy before proceeding to cadaveric dissection (intervention cohort). At a previous course, eighteen participants had received similar VR simulation training but without the structured self-assessment (reference cohort). Final products from VR simulation and cadaveric dissection were video-recorded and assessed by two blinded raters using a 19-point modified Welling Scale.
Results: The intervention cohort completed fewer procedures (average 4.2) during VR simulation training than the reference cohort (average 5.7). Nevertheless, the intervention cohort achieved a significantly higher average dissection score both in VR simulation (11.1 points, 95% CI [10.6–11.5]) and in subsequent cadaveric dissection (11.8 points, 95% CI [10.7–12.8]) compared with the reference cohort, which scored 9.1 points (95% CI [8.7–9.5]) during VR simulation and 5.8 points (95% CI [4.8–6.8]) during cadaveric dissection.
Conclusion: Structured self-assessment is a valuable learning support during self-directed VR simulation training of mastoidectomy, and its positive effect on performance transfers to subsequent cadaveric dissection.
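The kind of mean-and-confidence-interval summary reported in the results above can be reproduced with a standard normal approximation. The sketch below is a minimal illustration using hypothetical scores, not the study's data; the 19-point maximum merely mirrors the modified Welling Scale mentioned above.

```python
import math
import statistics as stats

def mean_ci(scores, z=1.96):
    """Mean and normal-approximation 95% CI for a list of dissection scores."""
    m = stats.mean(scores)
    sem = stats.stdev(scores) / math.sqrt(len(scores))  # standard error of the mean
    return m, (m - z * sem, m + z * sem)

# Hypothetical ratings on a 19-point scale (illustration only)
scores = [11, 12, 10, 11, 13, 12, 11, 10, 12, 11]
m, (lo, hi) = mean_ci(scores)
print(f"mean = {m:.1f}, 95% CI [{lo:.1f}-{hi:.1f}]")
```

With small samples such as n = 18 per cohort, a t-distribution multiplier (≈2.11 for 17 degrees of freedom) rather than z = 1.96 would be the more conservative choice.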
Objective: 3D-printed models hold great potential for temporal bone surgical training as a supplement to cadaveric dissection. Nevertheless, critical knowledge on manufacturing remains scattered, and little is known about whether use of these models improves surgical performance. This systematic review aims to explore (1) the methods used for manufacturing and (2) the educational evidence supporting the use of 3D-printed temporal bone models.
Data sources: PubMed, Embase, the Cochrane Library, and Web of Science.
Review methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, relevant studies were identified, and data on manufacturing and validation and/or training were extracted by 2 reviewers. Quality assessment was performed using the Medical Education Research Study Quality Instrument tool; educational outcomes were determined according to Kirkpatrick’s model.
Results: The search yielded 595 studies; 36 studies were found eligible and included for analysis. The described 3D-printed models were based on computed tomography scans from patients or cadavers. Processing included manual segmentation of key structures such as the facial nerve; postprocessing, for example, consisted of removal of print material inside the model. Overall, educational quality was low, and most studies evaluated their models using only expert and/or trainee opinion (ie, Kirkpatrick level 1). Most studies reported positive attitudes toward the models and their potential for training.
Conclusion: Manufacturing and use of 3D-printed temporal bones for surgical training are widely reported in the literature. However, evidence to support their use and knowledge about both manufacturing and the effects on subsequent surgical performance are currently lacking. Therefore, stronger educational evidence and manufacturing knowhow are needed for widespread implementation of 3D-printed temporal bones in surgical curricula.
PURPOSE: E-learning provides a flexible and effective approach to learning and is increasingly used in otorhinolaryngology (ORL). We developed a national theoretical e-learning course for ORL trainees and describe our experiences with implementation as well as piloting the e-learning course.
METHODS: E-learning course content was developed as structured multiple-choice quizzes based on the European core curriculum textbook. An open-source learning management system was adapted to deliver a self-paced e-learning course. We piloted the e-learning course as a non-mandatory option for the 15 residents participating in the Danish four-day national training course in rhinology in February 2019. A post-course survey was sent to the participants and used in the evaluation along with activity data from the learning management system.
RESULTS: Fourteen of 15 trainees participated in the e-learning course. Nine participants completed >95% of the course. Activity data demonstrated that participants with the highest completion rates typically began well in advance of the course (>2 months). Overall, the e-learning course was rated positively in relation to learning and as preparation for the national training course. Participants responded that the level of the e-learning course was higher than, and at times slightly incongruent with, the content of the national curriculum. Participants would like protected study time for e-learning activities in their residency program. All participants responded that they would use e-learning in connection with future national training courses.
CONCLUSIONS: Developing a national e-learning course is feasible and is well-received by trainees as well as other educational stakeholders.
A variety of structured assessment tools for use in surgical training have been reported, but extant tools often employ paper-based rating forms. Digital assessment forms for evaluating surgical skills could offer advantages over paper-based forms, especially in complex assessment situations. In this paper, we report the development of cross-platform digital assessment forms for use with multiple raters, designed to facilitate automatic processing of surgical skills assessments that include structured ratings. The FileMaker 13 platform was used to create a database containing the digital assessment forms because this software has cross-platform functionality on both desktop computers and handheld devices. The database is hosted online, so the rating forms can also be accessed through most modern web browsers. The database platform used in this study was reasonably priced, intuitive for the user, and flexible. The forms are provided online as free downloads and may serve as a basis for further development or as inspiration for future efforts. In conclusion, digital assessment forms can be used for the structured rating of surgical skills and have the potential to be especially useful in complex assessment situations with multiple raters, repeated assessments at various times and locations, and situations requiring substantial subsequent data processing or complex score calculations.
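The automatic score processing described above can be sketched in a few lines. This is a hypothetical illustration of aggregating structured ratings from multiple raters, not the FileMaker implementation reported in the paper; the item names are invented for the example and do not reproduce the actual modified Welling Scale items.

```python
from statistics import mean

# Hypothetical structured rating: each rater scores a set of dichotomous
# checklist items (1 = landmark adequately dissected, 0 = not).
def total_score(rating: dict) -> int:
    """Sum one rater's item scores into a total for that performance."""
    return sum(rating.values())

def aggregate(ratings: list) -> float:
    """Average the total scores across raters (a simple consensus score)."""
    return mean(total_score(r) for r in ratings)

rater_a = {"antrum": 1, "facial_nerve": 1, "sigmoid_sinus": 0}
rater_b = {"antrum": 1, "facial_nerve": 0, "sigmoid_sinus": 1}
print(aggregate([rater_a, rater_b]))  # both raters total 2, so the average is 2.0
```

Automating this step is precisely where digital forms pay off: with paper forms, each rater's checklist has to be transcribed before totals, rater averages, or inter-rater statistics can be computed.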