A simple eye model for objectively assessing the competency of direct ophthalmoscopy


Direct ophthalmoscopy is an important investigative technique not only for ophthalmologists but also for general practitioners and other specialists. The purpose of this study was to develop a simple and robust eye model for the effective and objective assessment of ophthalmoscopic competency.

A series of eye models was assembled from commonly available materials, including 26-mm-diameter two-piece hemispherical brown plastic balls and convex lenses. A 6-mm circular opening was drilled in one hemisphere to serve as the pupil, behind which a lens was glued to provide the refractive component. Ten paper pieces, each printed with a single letter, were placed on the inner surface of the other hemisphere. The ophthalmoscopic skills of ophthalmology residents were first assessed subjectively by two tutors using a checklist and then objectively using the eye models. The discrimination index was calculated to evaluate the effectiveness of each assessment. Finally, a feedback questionnaire was completed.

In total, 76 residents were recruited. The checklist score was 9.25 ± 0.47, with a discrimination index of 0.11. The model-assessment score was 4.24 ± 3.10, with a discrimination index of 0.79. There was no correlation between the checklist and model scores (r = 0.133, P = 0.251). Two-thirds of the participants agreed or strongly agreed that the model-assessment could reflect the ability to visualize the fundus.

We have developed simple eye models for assessing ophthalmoscopic competency, with excellent discriminatory power to differentiate the competence levels of ophthalmology residents.

In clinical practice, visualization of the fundus is important for the detection of many eye diseases and is also useful for systemic diseases, such as cardiovascular, endocrine and neurological disorders [1]. Direct ophthalmoscopy for fundus examination is an essential skill not only for ophthalmologists but also for other clinical specialists and even general practitioners. Mastering the technique, however, can be challenging and requires training and practice [1,2,3,4]. In medical education, it is not easy to incorporate such specific training into the curriculum or to objectively assess the required operational skills [5].

In conventional training programs at tertiary eye centers, trainers assess the ophthalmoscopic skills of trainees, mostly residents, using a checklist of operations such as manipulating the light beam and adjusting the dioptre [6]. However, this checklist-based assessment is subjective and may not reliably reflect ophthalmoscopic competency, since it essentially rewards memorization of the operational procedure rather than the ability to visualize the fundus [1]. An alternative is to use model eyes with built-in fundus images, such as head mannequins [2, 3, 7, 8]. In such assessments at teaching hospitals, resident trainees examine the eye model with a direct ophthalmoscope and give a diagnosis, which is then scored for accuracy. However, such eye-model set-ups can be expensive. Furthermore, this form of assessment requires not only competency in visualizing the fundus but also knowledge of retinal disease diagnosis; trainees can be competent in ophthalmoscopic skills yet inadequate in their knowledge of the pathologic features of retinal diseases [5]. Visualization of the fundus is important in detecting fundus diseases [9], but there have been few methods to assess this skill specifically.

Previously, Bradley reported a simple eye model, made from a table tennis ball with pieces of text on its inner surface, to assess the ophthalmoscopic skills of medical students [5]. It was an appealing design, but there was no attempt to replicate the refractive properties of a typical eyeball, and reading running text allowed some guesswork or filling-in, as opposed to reading isolated letters. Its discriminatory power as an assessment tool therefore had to be improved to enhance its practical usefulness.

This paper describes the development of a simple eye model, modified from Bradley’s, that enables the convenient and objective assessment of ophthalmoscopic competency, and evaluates its effectiveness in comparison with the checklist currently in use.

We assembled a series of simple and inexpensive eye models from readily available materials (Fig. 1A). The 26-mm-diameter brown plastic balls were obtained from Taobao, China (https://item.taobao.com/item.htm?spm=a230r.1.14.26.50515715LVXaKp&id=521478860398&ns=1&abbucket=4#detail). Each ball came as two separable hemispheres, one simulating the anterior segment of the eyeball and the other the posterior segment. In the anterior hemisphere, we drilled a 6-mm circular opening to serve as the pupil. A thin poly(methyl methacrylate) (PMMA) convex lens with a focal length of 26 mm, also from Taobao, China (https://item.taobao.com/item.htm?spm=a230r.1.14.40.5f9e3b035jqp1L&id=562289788402&ns=1&abbucket=4#detail), was glued to the inner surface behind the pupil to provide the refractive component. On the inner surface of the posterior hemisphere, we painted an optic disc and four sets of vessels radiating from the disc to the four quadrants. Subsequently, 10 pieces of paper, each printed with one randomized capital letter (black, 4-point Calibri), were placed on the inner surface, marking the macula, the disc, and the superior, superonasal, nasal, inferonasal, inferior, inferotemporal, temporal and superotemporal mid-peripheral areas. Finally, the two hemispheres were buckled together, and an “S” symbol was drawn externally to mark the 12 o’clock position of the model.

Fig. 1: A The anterior hemisphere with a 6-mm hole and a convex lens, and the posterior hemisphere with 10 pieces of randomized letters on the inner surface. B The eye model placed on a roll of tape; the trainee can manually rotate the model in different directions to visualize different parts of the fundus. C Test paper for objectively recording the letters in the eye model, with two versions corresponding to the left and the right eye.
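A note on the optics: with the letters sitting roughly one ball diameter (~26 mm) from a lens of 26-mm focal length, they lie near the focal plane, so light should exit the pupil approximately collimated, which is what lets the model behave like an emmetropic eye (see Discussion). The following minimal vergence-arithmetic sketch is our illustration, not part of the original protocol:

```python
# Thin-lens sketch of why a 26 mm focal length suits a 26 mm ball.
# Assumption (not stated explicitly in the paper): the letters on the
# posterior inner surface sit ~one ball diameter from the lens.

f = 0.026            # lens focal length in metres (26 mm)
d_object = 0.026     # letter-to-lens distance in metres (~ball diameter)

lens_power = 1 / f                    # ~38.5 dioptres
object_vergence = -1 / d_object       # diverging light from the letters
exit_vergence = object_vergence + lens_power

print(f"Lens power:    {lens_power:.1f} D")
print(f"Exit vergence: {exit_vergence:.1f} D")  # ~0 D -> collimated
# Near-zero exit vergence means the letters appear "at infinity", so an
# emmetropic examiner (or an ophthalmoscope set to 0 D) sees them in
# focus, mimicking an emmetropic model eye.
```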

A total of 8 standardized eye models (half for the left eye and half for the right eye), each with a different set of letters, were made to preserve randomization and test security: 4 for practice and the other 4 for assessment. The quality of these models was assured prior to their use in the study; the diameter of the pupil, the clarity of the lens and the visibility of the letters were all validated.

The eye model was placed on a roll of tape, and the trainees could manually rotate the model in different directions to visualize different parts of the fundus (Fig. 1B).

The study design is shown in Fig. 2. After attending a 30-min standardized training session on the operation of the direct ophthalmoscope (Welch Allyn 3.5 V Coaxial ophthalmoscope) given by an ophthalmologist (HW), the resident trainees received a 15-min demonstration of the design and usage of the eye model, followed by 30 min of practicing direct ophthalmoscopy on their fellow trainees and 30 min on the eye models, under the supervision of the same ophthalmologist (HW).

Ophthalmoscopic skill assessment was first conducted with a checklist (Supplementary file 1) modified from a previous report by Cordeiro et al. [6]. The residents performed direct ophthalmoscopy on a simulated patient, and the performance was scored by two independent trainers (XL and HW) using the checklist (total score 10). Afterwards, the residents used the direct ophthalmoscope to visualize the fundus of one randomly selected eye model and recorded the letters on a test paper (Fig. 1C) within 5 min, using the right hand and right eye for a right eye model and the left hand and left eye for a left eye model. The ophthalmoscopic skills of the residents were then assessed objectively according to the number of letters recorded correctly in the corresponding positions (1 point each, total score 10).
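A minimal sketch of this per-position scoring rule follows; the position labels come from the model description above, but the answer key and response shown are illustrative, not taken from the actual test papers:

```python
# Score 1 point for each letter recorded correctly in its corresponding
# fundus position (10 positions, total score 10).

POSITIONS = ["macula", "disc", "superior", "superonasal", "nasal",
             "inferonasal", "inferior", "inferotemporal", "temporal",
             "superotemporal"]

def score_response(answer_key: dict, response: dict) -> int:
    """Count positions where the recorded letter matches the key."""
    return sum(
        1 for pos in POSITIONS
        if response.get(pos, "").upper() == answer_key[pos]
    )

# Example with a hypothetical key and a partially correct response:
key = dict(zip(POSITIONS, "KXRQTBWMCF"))
resp = {"macula": "K", "disc": "X", "superior": "A", "nasal": "T"}
print(score_response(key, resp))  # -> 3
```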

Finally, the residents were asked to complete a 5-point Likert-type questionnaire (Supplementary file 2), designed for this study and not used previously. The questionnaire collected feedback on the eye models: fidelity of the eyeball simulation, ease of assembly, effectiveness for practice, agreement with the assessment, and expectation of popularization, with ratings on a scale of 1 (“strongly disagree”) to 5 (“strongly agree”).

Statistical analysis was undertaken using SPSS software (Version 23.0). Descriptive statistics were used to analyse the scores of the checklist-assessment and the model-assessment, as well as the feedback ratings. Data were presented as mean ± standard deviation (SD). Data normality and homogeneity of variance were tested using the Shapiro-Wilk test and Levene's test, respectively. The reliability of the checklist score was evaluated using the intraclass correlation coefficient (ICC). The correlation between the checklist score and the model-assessment score was tested with the Pearson correlation coefficient (r). The inter-model comparison was analysed using analysis of variance (ANOVA). A P value of <0.05 was considered significant.
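For readers without SPSS, the sketch below reproduces the same pipeline in Python on synthetic data; it is an illustration under assumed inputs (variable names, model assignment), not the original analysis. The ICC here comes from the pingouin package:

```python
import numpy as np
import pandas as pd
from scipy import stats
import pingouin as pg  # for the intraclass correlation coefficient

rng = np.random.default_rng(0)
n = 76
checklist_r1 = rng.normal(9.34, 0.45, n)           # trainer 1 scores
checklist_r2 = rng.normal(9.15, 0.51, n)           # trainer 2 scores
model_score = rng.normal(4.24, 3.10, n).clip(0, 10)

# Normality and homogeneity of variance
print(stats.shapiro(model_score))
print(stats.levene(checklist_r1, checklist_r2))

# Inter-rater reliability of the checklist (two-way ICC, long format)
long = pd.DataFrame({
    "resident": np.tile(np.arange(n), 2),
    "rater": ["r1"] * n + ["r2"] * n,
    "score": np.concatenate([checklist_r1, checklist_r2]),
})
print(pg.intraclass_corr(data=long, targets="resident",
                         raters="rater", ratings="score"))

# Checklist vs model-assessment correlation (Pearson r)
checklist_mean = (checklist_r1 + checklist_r2) / 2
print(stats.pearsonr(checklist_mean, model_score))

# Inter-model comparison (one-way ANOVA across the 4 assessment models)
groups = np.array_split(model_score, 4)  # stand-in for model assignment
print(stats.f_oneway(*groups))
```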

We used the difficulty index and the discrimination index to evaluate the effectiveness of the assessments. The difficulty index was calculated as the ratio of the mean score to the full score. According to published evaluation criteria [10, 11], tests are classified as easy (>0.80), intermediate (0.30–0.80) and difficult (<0.30). The discrimination index was calculated as the difficulty index in the upper group (the top 27% of scorers) minus that in the lower group (the bottom 27% of scorers) [12]. Based on Ebel’s guidelines [11], tests are considered poor, acceptable, good and excellent if the discrimination index is 0–0.19, 0.2–0.29, 0.3–0.39, and 0.4 or above, respectively.
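Both indices follow directly from these definitions; a minimal sketch (the toy scores are illustrative only):

```python
# Difficulty index = mean score / full score.
# Discrimination index = difficulty of the top 27% of scorers
#                        minus difficulty of the bottom 27%.
import numpy as np

def difficulty_index(scores, full_score=10):
    return np.mean(scores) / full_score

def discrimination_index(scores, full_score=10, fraction=0.27):
    s = np.sort(np.asarray(scores))
    k = max(1, int(round(len(s) * fraction)))
    lower, upper = s[:k], s[-k:]
    return (difficulty_index(upper, full_score)
            - difficulty_index(lower, full_score))

# Toy example: a spread-out score distribution discriminates well.
scores = [0, 1, 1, 2, 3, 4, 5, 6, 8, 9, 9, 10]
print(difficulty_index(scores))       # ~0.48 (intermediate)
print(discrimination_index(scores))   # ~0.87 (excellent)
```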

Seventy-six ophthalmology residents were enrolled in the study (mean age 26.1 ± 1.68 years; 29 [38%] males and 47 [62%] females).

The residents attained good performance in the checklist assessment, with scores of 9.34 ± 0.45 and 9.15 ± 0.51 from the two trainers (mean 9.25 ± 0.47). The ICC was 0.792 (P < 0.001). The difficulty index was 0.92 and the discrimination index was 0.11. In the model-based assessment, the overall letter-recording score was 4.24 ± 3.10, with a difficulty index of 0.42 and a discrimination index of 0.79. There was no correlation between the checklist and the model-based assessments (r = 0.133, P = 0.251).

The scores from each model (Table 1), all ranging from zero to ten, were normally distributed. Inter-model comparison of the objective scores revealed no significant difference among the 4 assessment models (P = 0.26). When divided by laterality, the score for the right eye models was higher than that for the left eye models, though not statistically significantly so (4.92 ± 3.13 vs 3.55 ± 2.97, P = 0.054). Across the 4 models, the difficulty index ranged from 0.33 to 0.50 and the discrimination index from 0.66 to 0.84.

Self-reported feedback on the eye model is shown in Fig. 3. Forty-two residents (57.9%) agreed or strongly agreed that the eye model offered a high degree of simulation for visualizing the fundus. Forty-eight (63.2%) felt the model was easy to assemble (rating score ≥4). Regarding effectiveness, 51 (67.1%) agreed or strongly agreed that the model-assessment score could reflect the ability to visualize the fundus, and 60 (78.9%) agreed or strongly agreed that practice with the eye model could improve their skills in direct ophthalmoscopy. Fifty-eight residents (76.3%) expected the eye model to become popular in medical education (rating score ≥4).

Fig. 3: Histograms of resident ophthalmologists’ ratings for (A) degree of simulation in visualizing the fundus; (B) ease of assembling the model; (C) agreement that the model-assessment could effectively reflect the ability to visualize the fundus; (D) effectiveness for practice; (E) expectation of popularization and application.

In this study, we developed a robust and simple eye model and found it effective in assessing the competency of direct ophthalmoscopy compared with the traditional checklist. The subjective checklist score was 9.25 ± 0.47, with a discrimination index of 0.11; the model-assessment score was 4.24 ± 3.10, with a discrimination index of 0.79. Notably, two-thirds of the residents agreed or strongly agreed that the model-assessment could reflect the ability to visualize the fundus.

In usual practice, ophthalmoscopic skill is assessed by trainers who give scores according to a checklist, focused mainly on the operating procedure [6]. In our study, the assessment checklist was divided into 3 parts: (a) preparation of the patient, the device and the environment; (b) the operating procedure, including manipulating the light, adjusting the dioptre and controlling the working distance; and (c) overall proficiency. The checklist-assessment showed a high level of performance, with a mean score reaching 9.25, indicating that it was very easy for residents to perform direct ophthalmoscopy from a memorized operating procedure. On the other hand, the checklist of operational steps appeared to measure something distinct from the model-assessment, which captured the ability to visualize the letters on the fundus. It has been reported that few young ophthalmologists are able to use an ophthalmoscope effectively even when they have memorized the operational procedure [1, 5].

Competency in ophthalmoscopic skills has long been a concern in medical education. Eye models with fundus images inside have been increasingly used as adjunctive tools for task-based skill assessment [2, 8]. Although task-based assessment can reflect competency more objectively and accurately, it may be technically challenging and unsuitable for students, or even residents, with limited clinical experience, since the capability to diagnose disease is required [5]. Moreover, the fundus images, located mainly centrally, can hardly be used to assess competency in inspecting the peripheral retina. Thus, a new approach is needed for a more targeted assessment of the ability to visualize the fundus.

Paul Bradley ingeniously used a table tennis ball as an eye model, with five sets of text in the fundus, for the objective assessment of direct ophthalmoscopy in 803 undergraduate medical students at the University of Liverpool, UK [5]. In Bradley’s study, the mean score for visualizing the text was 4.4 out of a total of 5 (difficulty index: 0.88), with a 95% confidence interval of (4.3, 4.5) and an estimated coefficient of variation (CV) of 32%. By contrast, the model-assessment in our study appeared more difficult (difficulty index = 0.42), and the score distribution was more dispersed (CV = 73%).
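These CV figures can be checked directly (CV = SD/mean); note that Bradley's SD below is back-calculated from the reported confidence interval, which is an approximation on our part rather than a value from his paper:

```python
import math

# Our study: CV = SD / mean
mean_ours, sd_ours = 4.24, 3.10
cv_ours = sd_ours / mean_ours            # ~0.73 -> 73%, as reported

# Bradley: mean 4.4, 95% CI (4.3, 4.5), n = 803 students.
# Back-calculate SD from the CI half-width (assumption on our part).
n, mean_b = 803, 4.4
se_b = (4.5 - 4.3) / (2 * 1.96)          # ~0.051
sd_b = se_b * math.sqrt(n)               # ~1.45
cv_b = sd_b / mean_b                     # ~0.33, near the stated 32%

print(f"{cv_ours:.0%} vs {cv_b:.0%}")
```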

Several factors may account for this difference in difficulty. First, there was no time limit in Bradley’s assessment, whereas the residents in our study were required to finish direct ophthalmoscopy within 5 min, which increased the difficulty. Second, we used a brown plastic ball to form a dark chamber, simulating the light-absorbing property of the choroid, whereas Bradley’s report did not mention how stray light entering through the pale wall of the table tennis ball was avoided, which might have made manipulating the light beam easier. Third, assessment by reading text is much easier than reading randomized letters of the same size, since a text message can be guessed by association. Fourth, the eye model in our study was 26 mm in diameter, closer to the diameter of a human eyeball and smaller than Bradley’s table-tennis-ball model (approximately 40 mm). Fifth, unlike Bradley’s model, which had no refractive component, we added a convex lens with an appropriate focal length to simulate an emmetropic eye, so the residents could not visualize the fundus without an ophthalmoscope.

In addition, we measured the difficulty index and the discrimination index, which are frequently used to evaluate the quality of a test [12]. The higher the difficulty index, the lower the difficulty: tests with a difficulty index above 0.90 are very easy and probably not worth administering, whereas tests with a very low difficulty index are difficult and inappropriate for students or residents. The discrimination index describes how effectively a test differentiates between high-ability and low-ability students, so a high discrimination index is desirable in skill assessment. The relationship between these two indexes has been characterized: Sim et al. found that the maximum discrimination index occurred at a difficulty index between 40 and 74% [12], and that either a very large or a very small difficulty index led to a decreased discrimination index and hence a failure to differentiate between weak and competent students. The model-assessment in our study was therefore moderate in difficulty (difficulty index = 0.42) and appropriate for the residents. Moreover, assessment with our eye model differentiated well between poor and competent performers (discrimination index = 0.79), compared with the checklist (discrimination index = 0.11) and with Bradley’s model, for which only the CV is available as a rough proxy for discrimination.

The inter-model comparison suggested no significant difference among the models, indicating good reproducibility of the models designed in our study. It is noteworthy that the residents tended to score higher when examining the right eye models, presumably because of ocular dominance and right-handedness. Further study is warranted to confirm the relationship between ophthalmoscopy performance and the dominant eye and hand.

Feedback from the residents showed their belief that this simple eye model was able to simulate the eyeball for fundus visualization, and that the model-assessment could reflect their ability in ophthalmoscopy. Most residents agreed that the model was of value as a tool for ophthalmoscopic practice and expected it to become popular in medical education. In addition, practicing ophthalmoscopy on our model avoids close contact with simulated patients and helps to prevent infection via respiratory droplets, especially in the era of COVID-19.

Most residents also felt it easy to assemble the eye model after the demonstration and subsequent practice. The plastic ball was low-cost and easy to buy online. Unlike Bradley’s model, in which the table tennis balls had to be cut into halves, our hemispheres came factory-made as two separate pieces: no cutting was needed, and components could be painted onto or glued to the inner surface directly.

The strengths of this study included the design of a simple eye model with high fidelity, the objectiveness of the ophthalmoscopic skill assessment, and the novel use of difficulty and discrimination indexes to evaluate the effectiveness of the assessment.

There are also limitations to this study. First, visualization of the fundus varied in difficulty with the anatomical location, being easier for the central than the peripheral retina. Moreover, localization was not easy: quite a few residents recorded letters in adjacent positions. To assess ophthalmoscopic skill more accurately, further refinements, including weighting of scores and correction for wrong locations, are needed. Finally, the study was limited by possible differences in the difficulty of distinguishing the randomly selected letters, although the inter-model analysis showed no significant difference.

It should also be noted that this was a cross-sectional study with a single training and assessment session, focused on skill assessment rather than skill training. Further study with a longitudinal, randomized controlled design will therefore be needed to confirm the effectiveness of the model in skill training. To extend its application, a series of eye models with various pupil sizes, dioptres and severities of refractive media opacity will be provided to simulate various clinical situations.

In conclusion, we have developed a simple and reproducible eye model that meets the need for objective assessment of ophthalmoscopic skills. The model-assessment reflected competency in ophthalmoscopic skill, with the discriminatory power to differentiate performers at different skill levels. Moreover, the model is low-cost and easy to assemble, and is potentially useful in ophthalmologic education.

Direct ophthalmoscopy is an essential skill for ophthalmologists and other specialists.

However, it is difficult to assess the competency of direct ophthalmoscopy in practice and a new approach is needed to enable the assessment.

This study described a simple eye model and evaluated its effectiveness in the assessment of ophthalmoscopic competency.

The simple eye model could be used to objectively assess the competency of ophthalmoscopy and showed excellent discriminatory power to differentiate performers in different skill levels.

1. Chen M, Swinney C, Chen M, Bal M, Nakatsuka A. Comparing the utility of the non-mydriatic fundus camera to the direct ophthalmoscope for medical education. Hawaii J Med Public Health. 2015;74:93–95.

2. Chung KD, Watzke RC. A simple device for teaching direct ophthalmoscopy to primary care practitioners. Am J Ophthalmol. 2004;138:501–2.

3. Yusuf IH, Ridyard E, Fung THM, Sipkova Z, Patel CK. Integrating retinal simulation with a peer-assessed group OSCE format to teach direct ophthalmoscopy. Can J Ophthalmol. 2017;52:392–7.

4. Nagra M, Huntjens B. Smartphone ophthalmoscopy: patient and student practitioner perceptions. J Med Syst. 2019;44:10.

5. Bradley P. A simple eye model to objectively assess ophthalmoscopic skills of medical students. Med Educ. 1999;33:592–5.

6. Cordeiro MF, Jolly BC, Dacre JE. The effect of formal instruction in ophthalmoscopy on medical student performance. Med Teach. 1993;15:321–5.

7. Androwiki JE, Scravoni IA, Ricci LH, Fagundes DJ, Ferraz CA. Evaluation of a simulation tool in ophthalmology: application in teaching funduscopy. Arq Bras Oftalmol. 2015;78:36–39.

8. Swanson S, Ku T, Chou C. Assessment of direct ophthalmoscopy teaching using plastic canisters. Med Educ. 2011;45:520–1.

9. Levy A, Churchill AJ. Training and testing competence in direct ophthalmoscopy. Med Educ. 2003;37:483–4.

10. Amini N, Michoux N, Warnier L, Malcourant E, Coche E, Vande Berg B. Inclusion of MCQs written by radiology residents in their annual evaluation: innovative method to enhance resident’s empowerment? Insights Imaging. 2020;11:8.

11. Mitra NK, Haleagrahara N, Ponnudurai G, Judson J. The levels of difficulty and discrimination indices in type A multiple choice questions of pre-clinical semester 1 multidisciplinary summative tests. Int e-J Sci Med Educ. 2009;3:2–7.

12. Sim SM, Rasiah RI. Relationship between item difficulty and discrimination indices in true/false-type multiple choice questions of a para-clinical multidisciplinary paper. Ann Acad Med Singap. 2006;35:67–71.

We would like to express our deepest gratitude to the participating residents. This study was assisted by Xiaolin Zhu, Shun Lin, Zijing Huang, Binyao Chen, Jianling Yang and Mudan Lin.

This work was funded by Intramural Grant of Joint Shantou International Eye Center (20-043) and the Key Disciplinary Project of Clinical Medicine under the Guangdong High-level University Development Program (002-18119101), China. The funding body did not play any role in study design, collection, analysis and interpretation of the data and manuscript preparation.

Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong, China

Hongxi Wang, Xulong Liao, Mingzhi Zhang, Chi Pui Pang & Haoyu Chen

Department of Ophthalmology and Visual Sciences, Chinese University of Hong Kong, Hong Kong, China


HYC and MZZ designed the study and the training course, and stated agreement to be accountable for all aspects of the work. HYC and HXW designed the eye model. HXW and XLL collected the data. HXW analysed and interpreted the data and drafted the manuscript. HYC critically revised the manuscript. All authors read and approved the final manuscript.

The authors declare no competing interests.

This study was conducted at the Joint Shantou International Eye Center (JSIEC) of Shantou University and the Chinese University of Hong Kong. Ophthalmology residents were recruited to participate in the study during 2020–2021. The study was conducted in compliance with the Declaration of Helsinki and was approved by the Human Medical Ethics Committee of JSIEC. All participants provided written informed consent.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wang, H., Liao, X., Zhang, M. et al. A simple eye model for objectively assessing the competency of direct ophthalmoscopy. Eye (2021). https://doi.org/10.1038/s41433-021-01730-8
