OBJECTIVE: Handheld otoscopy requires both technical and diagnostic skills, which are often reported to be insufficient after medical training. We aimed to develop and gather validity evidence for an assessment tool for handheld otoscopy using contemporary medical education standards.
STUDY DESIGN: Educational study.
SETTING: University/teaching hospital.
SUBJECTS AND METHODS: A structured Delphi methodology was used to develop the assessment tool: nine key opinion leaders in undergraduate otoscopy training (otologists) iteratively achieved consensus on its content. Next, validity evidence was gathered by having two raters assess video-recorded handheld otoscopy performances (two per participant) of 15 medical students (novices) and 11 specialists in otorhinolaryngology (ORL). Standards of performance (pass/fail criteria) were explored using the contrasting-groups and Angoff methods.
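The contrasting-groups method mentioned above places the pass/fail cut score where the score distributions of the two contrasting groups cross, minimizing misclassification. A minimal sketch in Python, using hypothetical total scores (not the study's data) and assuming approximately normal score distributions in each group:

```python
# Illustrative sketch of the contrasting-groups standard-setting method.
# All scores below are hypothetical examples, not data from the study.
import statistics
from math import exp, pi, sqrt

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a normal distribution with mean mu and SD sigma."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def contrasting_groups_cut(novice_scores, expert_scores) -> float:
    """Search a fine grid between the group means for the score at which
    the two fitted normal densities cross (the contrasting-groups cut)."""
    mu_n, sd_n = statistics.mean(novice_scores), statistics.stdev(novice_scores)
    mu_e, sd_e = statistics.mean(expert_scores), statistics.stdev(expert_scores)
    grid = [mu_n + i * (mu_e - mu_n) / 1000 for i in range(1001)]
    return min(grid, key=lambda x: abs(normal_pdf(x, mu_n, sd_n)
                                       - normal_pdf(x, mu_e, sd_e)))

# Hypothetical total scores on a 50-point instrument
novices = [18, 22, 25, 20, 24, 21, 23, 19]
experts = [44, 46, 42, 45, 43, 47, 41, 44]
cut = contrasting_groups_cut(novices, experts)
```

The cut score falls between the two group means, closer to whichever group has the wider score spread; the Angoff method, by contrast, derives the standard from judges' item-level estimates rather than from observed group performance.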
RESULTS: The developed Copenhagen Assessment Tool of Handheld Otoscopy Skills (CATHOS) consists of 10 items rated on a 5-point Likert scale with descriptive anchors. Validity evidence was collected and structured according to Messick's framework: for example, the CATHOS discriminated well between groups (mean difference in performance between novices and experts of 20.4 out of 50 points, p < 0.001) and had high internal consistency (Cronbach's alpha = 0.94). Finally, a pass/fail score was established at 30 points for medical students and 42 points for specialists in ORL.
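The reported Cronbach's alpha summarizes how consistently the 10 CATHOS items rank examinees. A minimal sketch of the standard computation, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using hypothetical Likert ratings (not the study's data):

```python
# Illustrative computation of Cronbach's alpha for internal consistency.
# The rating matrix below is a made-up example, not data from the study.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = examinees, columns = instrument items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert ratings: 4 examinees x 10 items
ratings = np.array([
    [5, 4, 5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 2, 3, 3, 2, 3, 3, 2, 3],
    [4, 4, 4, 3, 4, 4, 4, 3, 4, 4],
    [1, 2, 1, 2, 1, 2, 1, 2, 1, 2],
])
alpha = cronbach_alpha(ratings)
```

Values approaching 1 indicate that the items behave consistently as a single scale; an alpha of 0.94, as reported for the CATHOS, is conventionally regarded as high.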
CONCLUSION: We have developed and gathered validity evidence for an assessment tool for the technical skills of handheld otoscopy and have set standards of performance. Standardized assessment allows learning to be individualized to the level of proficiency, could be implemented in undergraduate and postgraduate handheld otoscopy training curricula, and is also useful for evaluating training interventions.
Purpose: At graduation from medical school, competency in otoscopy is often insufficient. Simulation-based training can be used to improve technical skills, but the suitability of the training model and assessment must be supported by validity evidence. The purpose of this study was to collect content validity evidence for a simulation-based test of handheld otoscopy skills.
Methods: First, a three-round Delphi study was conducted with a panel of nine clinical teachers in otorhinolaryngology (ORL) to determine the content requirements in our educational context. Next, the authenticity of relevant cases in a commercially available technology-enhanced simulator (Earsi, VR Magic, Germany) was evaluated by specialists in ORL. Finally, an integrated course was developed for the simulator based on these results.
Results: The Delphi study resulted in nine essential diagnoses (normal variations and pathologies) that all junior doctors should be able to make with a handheld otoscope. Twelve of 15 tested simulator cases were correctly recognized by at least one ORL specialist. Fifteen cases from the simulator case library matched the essential diagnoses determined by the Delphi study and were integrated into the course.
Conclusion: Content validity evidence for a simulation-based test of handheld otoscopy skills was collected. This informed a simulation-based course that can be used for undergraduate training. The course needs further investigation with respect to other aspects of validity and to its use for self-directed training.