How AI can help address disparities in diabetic eye exams
By Mozhdeh Bahrainian, MD, Tin Yan Alvin Liu, MD, Risa M. Wolf, MD, and Roomasa Channa, MD
Diabetes is a global epidemic and diabetic retinopathy (DR) is a major cause of vision loss, particularly among working-aged adults. In 2021, about 9.6 million people in the United States had DR and 1.8 million had vision-threatening diabetic retinopathy.1 Racial/ethnic minorities and socioeconomically disadvantaged individuals with diabetes are at especially high risk of vision loss, and DR is more prevalent among Blacks (39 percent) and Mexican Americans (34 percent) than among Whites (26 percent).2,3
Screening and early treatment of DR can prevent vision loss in 90 percent of cases.4 However, in the United States, at best about 60 percent of people with diabetes receive an annual eye exam.5 These rates are consistently lower among racial/ethnic minorities and socioeconomically disadvantaged communities.6-8 Even after adjusting for socioeconomic status and insurance, Hispanics with diabetes are less likely to visit an eye doctor than Whites.9
Barriers to provider-based screening
In the United States, referring patients with diabetes to eye-care providers for a diabetic eye exam has been the usual method of screening for DR.
However, patients report many barriers to getting the recommended screening, including the need to make an additional appointment to see an eye doctor, miscommunication about the need for a diabetic eye exam, and cost, all of which are accentuated for socioeconomically disadvantaged communities.6,10 This method of screening for diabetic eye disease isn’t scalable and perpetuates disparities in access to much-needed diabetic eye care.
How AI may overcome these barriers
Artificial intelligence-based screening is a promising method for detection of referable diabetic eye disease at the primary care provider’s office. This is potentially an effective solution as 80 percent of patients with diabetes see their primary care providers, making these visits an excellent opportunity to conduct eye screening.11
This is even more important for our nation’s medically underserved patients who often present to Federally Qualified Health Centers (FQHCs) for their primary care. FQHCs provide care to more than 30 million Americans, including one in three people living in poverty, one in five rural residents and more than 60 percent of racial/ethnic minorities.12,13 At this time, 70 percent of FQHCs don’t have eye-care providers on site.14
AI-based diabetic eye screening programs have been associated with improved screening rates as well as follow-up with recommended eye care.15-17 Importantly, implementation of AI-based eye screening has been shown to improve diabetic eye screening rates across racial/ethnic and socioeconomically disadvantaged groups and has shown the potential to close the current gaps in diabetic eye care.18-21
Existing autonomous AI platforms
Three autonomous AI platforms have been cleared by the U.S. Food and Drug Administration for diabetic eye testing. They are:
IDx-DR system, now known as LumineticsCore (Digital Diagnostics), was cleared by the FDA in 2018. The system has demonstrated an 87.4-percent sensitivity, 89.5-percent specificity and 96-percent imageability for detecting more-than-mild DR (mtmDR),22 defined as Early Treatment Diabetic Retinopathy Study level of 35 or higher.
EyeArt system (Eyenuk) was cleared by the FDA in 2020 to detect mtmDR and vision-threatening diabetic retinopathy (vtDR). It has shown 96-percent sensitivity, 88-percent specificity and 97-percent imageability for detecting eyes with mtmDR, and 97-percent sensitivity and 90-percent specificity for detecting vtDR,23 defined as ETDRS level of 53 or higher, but not equal to 90 and/or presence of clinically significant macular edema (CSME).
AEYE Health’s AI-based system was cleared by the FDA in 2022 for detection of mtmDR. It demonstrated a 93-percent sensitivity and 91.4-percent specificity for detecting mtmDR.24
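The sensitivity and specificity figures cited above come from pivotal-trial confusion matrices. As a minimal sketch of how those metrics are derived from screening counts, the snippet below uses hypothetical numbers chosen only for illustration (they are not actual trial data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute standard screening-test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate among eyes with disease
    specificity = tn / (tn + fp)  # true-negative rate among eyes without disease
    return sensitivity, specificity

# Hypothetical counts for illustration only (not data from any pivotal trial):
sens, spec = diagnostic_metrics(tp=174, fp=84, fn=25, tn=716)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

Imageability, the third metric reported for these systems, is simply the fraction of imaging attempts that yield a gradable result.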
|Figure 1. The estimated numbers of patients with diabetes progressing to severe vision loss per 100,000 at five years in various scenarios, as estimated using Markov modeling comparing no-screening, eye-care provider (ECP)-based screening, artificial intelligence-based screening and AI-based screening maximized for adherence. AI-based screening has the potential to prevent vision loss in about 27,000 more Americans with diabetes compared to ECP-based screening. Base case estimates are based on parameters as close to the real world as possible. Maximized for adherence estimates are based on parameters maximized for adherence with screening, follow-up and recommended treatments.|
Challenges in adopting AI-based eye screening
While uptake of AI-based eye screening in specialized pediatric endocrine clinics improved screening rates to over 90 percent, screening uptake in adult primary care clinics remains about 60 percent.19
Multiple factors are likely to affect AI-based screening uptake in the clinic and a team-based approach is required to address these barriers.25 These factors include lack of clarity regarding reimbursement and multiple competing demands in adult primary-care clinics with limited resources.
The CPT code 92229 was approved in 2021 to reimburse for “imaging of retina for detection or monitoring of disease; point-of-care autonomous analysis and report.”26 As awareness regarding this code increases, more clinics may be able to realize the return on investing in AI-based screening for diabetic eye disease.
Implementation strategies using established frameworks, incorporating the needs of multiple stakeholders, are needed to address implementation barriers and optimize the initial and sustained uptake of AI-based screening.
Potential impact of AI-based screening
Our team developed a simulation to estimate vision loss prevented using eye-care provider-based vs. AI-based screening. The model showed that if AI were to replace the current eye-care provider-based system of screening, severe vision loss could be prevented in 90 per 100,000 individuals with diabetes at five years (Figure 1). This translates to at least 27,000 Americans over five years, assuming 34 million Americans have diabetes.27
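The scaling behind that estimate can be sketched directly. Both inputs below are taken from the text; the straight multiplication slightly exceeds the deliberately conservative "at least 27,000" figure:

```python
# Back-of-envelope scaling of the Markov-model estimate from the text.
prevented_per_100k = 90              # severe vision loss prevented per 100,000 at 5 years
us_diabetes_population = 34_000_000  # assumed number of Americans with diabetes
prevented_total = prevented_per_100k * us_diabetes_population / 100_000
print(f"{prevented_total:,.0f} cases of severe vision loss prevented over 5 years")
```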
This effect is likely to grow because the number of people with diabetes is projected to increase. Furthermore, it can be multiplied many times over if AI-based eye screening is adapted to local needs and downstream aspects of care, such as follow-up with recommended eye care and strategies to promote adherence with metabolic control and ophthalmic follow-up, are optimized.
Artificial intelligence-based screening is a promising approach to promote timely detection of referable diabetic retinopathy and decrease disparities in vision loss from diabetes. RS
1. Lundeen EA, Burke-Conte Z, Rein DB, et al. Prevalence of diabetic retinopathy in the US in 2021. JAMA Ophthalmol. 2023;141:747-754.
2. American Diabetes Association. Focus on Diabetes: Look closer at eye health. May is health vision month ... Did you know? Updated May 2022. Available at: https://diabetes.org/sites/default/files/2022-04/FOD HVM 0.pdf. Accessed August 31, 2023.
3. Zhang X, Saaddine JB, Chou CF, et al. Prevalence of diabetic retinopathy in the United States, 2005-2008. JAMA. 2010;304:649-656.
4. Ferris FL, 3rd. How effective are treatments for diabetic retinopathy? JAMA. 1993;269:1290-1291.
5. Flaxel CJ, Adelman RA, Bailey ST, et al. Diabetic Retinopathy Preferred Practice Pattern. Ophthalmology. 2020;127:P66-P145.
6. Thomas CG, Channa R, Prichett L, Liu TYA, Abramoff MD, Wolf RM. Racial/ethnic disparities and barriers to diabetic retinopathy screening in youths. JAMA Ophthalmol. 2021;139:791-795.
7. Shi Q, Zhao Y, Fonseca V, Krousel-Wood M, Shi L. Racial disparity of eye examinations among the U.S. working-age population with diabetes: 2002–2009. Diabetes Care. 2014;37:1321-1328.
8. Fathy C, Patel S, Sternberg P, Jr., Kohanim S. Disparities in adherence to screening guidelines for diabetic retinopathy in the United States: A comprehensive review and guide for future directions. Semin Ophthalmol. 2016;31:364-377.
9. Marcondes FO, Cheng D, Alegria M. Are racial/ethnic minorities recently diagnosed with diabetes less likely than white individuals to receive guideline-directed diabetes preventive care? BMC Health Serv Res. 2021;21:1150.
10. Fairless E, Nwanyanwu K. Barriers to and facilitators of diabetic retinopathy screening utilization in a high-risk population. J Racial Ethn Health Disparities. 2019;6:1244-1249.
11. Gibson DM. Estimates of the percentage of US adults with diabetes who could be screened for diabetic retinopathy in primary care settings. JAMA Ophthalmol. 2019;137:440-444.
12. Health Resources & Services Administration. Health Center Program: Impact and Growth. Updated August 2023. Available at: https://bphc.hrsa.gov/about-health-centers/health-center-program-impact-growth. Accessed August 31, 2023.
13. Health Resources & Services Administration. National Health Center Program Uniform Data System (UDS) Awardee Data. Undated. Available at: https://data.hrsa.gov/tools/data-reporting/program-data/national. Accessed August 31, 2023.
14. Shin P, Finnegan B. Assessing the need for on-site eye care professionals in community health centers. Policy Brief George Wash Univ Cent Health Serv Res Policy. 2009:1-23.
15. Liu J, Gibson E, Ramchal S, et al. Diabetic Retinopathy screening with automated retinal image analysis in a primary care setting improves adherence to ophthalmic care. Ophthalmol Retina. 2021;5:71-77.
16. Mathenge W, Whitestone N, Nkurikiye J, et al. Impact of artificial intelligence assessment of diabetic retinopathy on referral service uptake in a low-resource setting: The RAIDERS randomized trial. Ophthalmol Sci. 2022;2:100168.
17. Wolf RM, Liu TYA, Thomas C, et al. The SEE study: Safety, efficacy, and equity of implementing autonomous artificial intelligence for diagnosing diabetic retinopathy in youth. Diabetes Care. 2021; 44:781-787.
18. Leong A, Wang J, Wolf R, et al. Autonomous artificial intelligence (AI) increases health equity for patients who are more at risk for poor visual outcomes due to diabetic eye disease (DED). Invest Ophthalmol Vis Sci. 2023;64:242.
19. Huang J, Wang J, Channa R, Wolf R, Abramoff MD, Liu TYA. Autonomous artificial intelligence exams are associated with higher adherence to diabetic retinopathy testing in an integrated healthcare system. Invest Ophthalmol Vis Sci. 2023;64:212.
20. Liu TYA, Huang J, Lehmann H, Wolf RM, Channa R, Abramoff MD. Autonomous artificial intelligence (AI) testing for diabetic eye disease (DED) closes care gap and improves health equity on a systems level. Diabetes. 2023;72:S1 261.
21. Zehra A, Bromberger AL, Pan B, et al. Autonomous artificial intelligence diabetic eye exams to mitigate disparities in screening completion. Diabetes. 2023;72:S1 110.
22. Abramoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018;1:39.
23. Ipp E, Liljenquist D, Bode B, et al. Pivotal evaluation of an artificial intelligence system for autonomous detection of referrable and vision-threatening diabetic retinopathy. JAMA Netw Open. 2021;4:e2134254.
24. U.S. Food and Drug Administration. Letter to AEYE Health. November 10, 2022. Available at: https://www.accessdata.fda.gov/cdrh_docs/pdf22/K221183.pdf. Accessed August 31, 2023.
25. Zafar S, Mahjoub H, Mehta N, Domalpally A, Channa R. Artificial intelligence algorithms in diabetic retinopathy screening. Curr Diab Rep. 2022;22:267-274.
26. Centers for Medicare and Medicaid Services. Billing and Coding: Remote imaging of the retina to screen for retinal diseases. Updated January 1, 2023. Available at: https://www.cms.gov/medicare-coverage-database/view/article.aspx?articleid=58914#:~:text=CPT%C2%AE%2092229%20allows%20coverage,and%20report%2C%20unilateral%20or%20bilateral. Accessed August 31, 2023.
27. Channa R, Wolf R, Abràmoff MD, Lehmann HP. Effectiveness of artificial intelligence screening in preventing vision loss from diabetes: A policy model. NPJ Digit Med. 2023;6:53.
How AI with home-based OCT may change the nAMD treatment paradigm
By Miguel Busquets, MD
Home-based optical coherence tomography partnered with artificial intelligence represents a paradigm shift for high-frequency monitoring of neovascular age-related macular degeneration (nAMD). It gives retina specialists a tool to personalize management to each patient’s tolerance for fluid.
As longer-acting treatments evolve, retina specialists are trying to find ways to optimize treatment intervals. Tracking fluid on a daily basis with home-based OCT combined with AI to analyze findings has the potential to be a powerful tool to do so.
We recently reported on a study that investigated AI-derived fluid volume trajectories in nAMD patients using daily monitoring with the Notal Vision Home OCT (NVHO).1 The purpose was to evaluate fluid dynamics during the reactivation-to-time-of-treatment and treatment-to-response intervals and to analyze the impact of treatment delay on treatment response (Figure 2). Our patient data demonstrated strong heterogeneity in both fluid recurrence and resolution patterns.
|Figure 2. A Home OCT screen view of three measures that provide insights into fluid trends. (Courtesy Notal Vision).|
Tracking fluid daily
The best way to optimize treatment intervals is to track fluid daily. Adding AI modules and platforms to analyze and quantify fluidics amplifies our assessment capabilities.
As retina specialists, we need to take that information and correlate it with vision and patient symptoms to determine what the ideal interval is. We illustrated this point with two cases: one patient who required very tight management of fluid and another who could tolerate large amounts of subretinal fluid and still maintain 20/20 vision.
We generally have no way of knowing whether patients, like the one in our second example, would maintain good vision. However, with NVHO we could determine that the patient tolerated a certain level of fluid, allowing for greater flexibility with dosing intervals. That patient could have gone five to six weeks without needing another injection. But using today’s standard of care without the information NVHO can generate, the patient would have been brought in monthly.
This type of approach is significant in the era of extended treatment regimens. It’s a matter of patient convenience and access because it means not bringing patients into the clinic who don’t need injections and freeing up clinic time for patients who do need them. Payers also are demanding this level of efficiency, creating a constant tug-of-war for retina specialists.
Quantifying treatment responses
In our study, expert graders manually annotated phases of the fluid volume trajectory, segmenting reactivation and resolution periods, which yielded 35 reactivations and 48 responses from 57 eyes of 54 patients. The study quantified treatment response for two groups: patients treated within seven days of recurrence and patients treated more than seven days after recurrence.
The mean (standard deviation) reactivation phase duration was 12 (10) days with a mean fluid increase rate of 12 (18) nL/day. The mean response phase duration was 11 (8) days with a mean fluid reduction rate of 8 (9) nL/day.
When we divided the events according to treatment timing, measured as <1 week or >1 week from the beginning of the reactivation phase, the groups had a significant difference in mean volume at treatment [36 vs. 139 nL (p<0.003)] as well as in mean time to fluid resolution [4.7 vs. 13.6 days (p<0.02)]. The mean area under the curve was 76 and 769 nL-days for the early- and late-treatment groups, respectively (p<0.00001).
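The area-under-the-curve metric (in nL-days) integrates fluid volume over the course of an episode, capturing both how high the fluid rose and how long it persisted. A minimal sketch of how such a value could be computed from daily Home OCT readings using the trapezoid rule; the day grid and volumes below are invented purely for illustration:

```python
import numpy as np

def fluid_auc(days, volumes_nl):
    """Area under the fluid-volume curve in nL-days (trapezoid rule)."""
    days = np.asarray(days, dtype=float)
    volumes_nl = np.asarray(volumes_nl, dtype=float)
    # Average adjacent readings and weight by the interval between scans.
    return float(np.sum((volumes_nl[1:] + volumes_nl[:-1]) / 2 * np.diff(days)))

# Hypothetical daily readings across one reactivation-and-response episode:
days = [0, 1, 2, 3, 4, 5, 6, 7]
volumes = [0, 5, 12, 20, 30, 18, 8, 0]
print(f"{fluid_auc(days, volumes):.0f} nL-days")  # → 93 nL-days
```

A delayed treatment stretches the episode and raises the peak, which is why the late-treatment group's AUC was roughly tenfold larger.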
Machine learning is being used to identify the presence or absence of fluid. Based on our findings, AI outperforms human readers in this regard. AI can further improve our evaluation process by not only assessing the presence or absence of fluid, but also fluid volume over time, which provides a new parameter for evaluating patients with exudative disease. Tracking fluid volume, and not only its presence or absence, along with the central subfield thickness adds a new dimension to our diagnostic repertoire.
The next steps in the use of this technology may include the incorporation of multivariate analyses and predictive modeling. Can AI extrapolate that information into an ideal treatment algorithm for a patient that involves interval selection and drug selection? Based on research, it certainly has the potential.
By tying multiple diagnostic elements together—fluid volume, treatment outcomes, visual acuity and patient symptoms—AI has the potential to use multivariate analysis to determine ideal treatment selection and interval. This may be the next level of AI utilization. RS
How machine-learning models may improve management of CRVO
By Yasha Modi, MD
Machine learning in retina is in its infancy. Food and Drug Administration approval exists only for the screening of diabetic retinopathy, and there are no prognostic-focused algorithms. For this technology to become meaningful to physicians, machine-learning (ML) functionality will need to scale up to provide diagnostic and prognostic insights across a host of retinal diseases. This effort will involve validating diagnostic accuracy across different disease states as well as for prognostic guidance within each disease state.
We reported on a recent collaborative project aimed at providing a proof of concept of the prognostic capabilities of ML using a robust dataset from two Phase III randomized clinical trials (RCTs), COPERNICUS and GALILEO.1 The trials evaluated 2-mg aflibercept for the treatment of macular edema due to central retinal vein occlusion (CRVO). Patients received monthly treatment for the first six months and then were transitioned to a pro re nata (PRN) approach.
The transition from monthly to PRN treatment allowed us to create and test an algorithm to predict one-year outcomes. Could the algorithm predict visual acuity or change in VA? Could it predict central subfield thickness (CST) or dosing frequency during the PRN phase, and could it reveal which variables carry the most weight in these predictions?
It’s certainly unusual to use a Phase III dataset as the model to train an artificial-intelligence algorithm. This is, in part, due to the small dataset size.
However, unlike large real-world datasets that tend to have large amounts of missing data or incorrect information (e.g. data carried forward in the electronic medical record), RCT data have a very high accuracy. This can potentially amplify the signal-to-noise ratio, which can be diluted in incomplete or inaccurate datasets.
COPERNICUS and GALILEO (n=351) randomized patients 3:2 to treatment vs. sham. The studies obtained extensive baseline demographics, medical characteristics including laboratory values and multiple postbaseline outcomes for each patient.
Using a random forest model, we opted to evaluate the following parameters: absolute best-corrected visual acuity (BCVA) and BCVA change at week 52; change in CST at week 52; and intravitreal aflibercept injection dosing frequency from week 24 through week 52.
ML model showed a high degree of accuracy
The model was trained using 80 percent of the dataset (n=159) to learn and refine outcome predictions. The remaining dataset (n=39) was used to establish validation and performance metrics of the ML models. The models used patient data for 47 baseline features, including demographics and laboratory values, and 13 postbaseline factors up to 24 weeks.
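As a sketch of that workflow, the snippet below applies scikit-learn's RandomForestRegressor to synthetic data and shows the 80/20 train-validation split plus how feature importances surface the dominant predictors. Everything here is invented for illustration (the feature set, effect sizes and noise model are assumptions, not the study's actual pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for two of the study's features (ETDRS letters):
baseline_bcva = rng.normal(55, 10, n)
week24_bcva = baseline_bcva + rng.normal(5, 5, n)
# Invented outcome: week-52 BCVA driven mostly by week-24 BCVA,
# mirroring the study's finding that interim BCVA dominates the prediction.
week52_bcva = 0.9 * week24_bcva + rng.normal(0, 2, n)

X = np.column_stack([baseline_bcva, week24_bcva])
X_train, X_val, y_train, y_val = train_test_split(
    X, week52_bcva, test_size=0.2, random_state=0)  # 80/20 split, as in the study

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"validation R^2: {model.score(X_val, y_val):.2f}")
print(f"feature importances (baseline, week 24): {model.feature_importances_}")
```

On this toy data the week-24 feature dominates the importances, which is how the study could identify interim BCVA as the main driver of the week-52 prediction.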
The ML model predicted absolute BCVA at week 52 with 87-percent accuracy, mainly driven by BCVA at weeks 16, 20 and 24. In a patient-to-patient comparison, the absolute BCVA at week 52 was 4.5 to 4.6 letters larger for every 5-letter increase in absolute BCVA at weeks 16, 20 or 24.
For predicting BCVA change at week 52, the model demonstrated 76-percent accuracy. The patient-to-patient comparison found a 1.7-letter smaller gain in BCVA at week 52 for every 5-letter increase in baseline BCVA. In contrast, there was a 1.3-letter greater gain in BCVA at week 52 for every 5-letter increase in BCVA at weeks 20 and 24.
The ML model also was able to predict change in CST from baseline at week 52 with high accuracy (r=0.76). The key predictive factors were baseline CST and BCVA. Interestingly, however, the algorithm wasn’t able to reliably detect absolute CST at week 52.
The ML model predicted PRN injection frequency from weeks 24 through 52 with an 83-percent accuracy, with CST at baseline and at week four serving as the key drivers. The patient-to-patient comparison showed the odds of receiving two or fewer PRN injections were 10 percent lower for every 50-µm increase in baseline CST, and 20 percent lower for every 50-µm increase in CST at week four.
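Those per-50-µm odds effects compound multiplicatively, because log-odds are additive across steps. A small sketch of the extrapolation, assuming the log-linear relationship holds beyond one step (the 150-µm difference below is a hypothetical example, not a figure from the study):

```python
def scaled_odds_ratio(or_per_50um, delta_cst_um):
    """Extrapolate a per-50-µm odds ratio to an arbitrary CST difference.

    Odds ratios multiply across each 50-µm step because log-odds add.
    """
    return or_per_50um ** (delta_cst_um / 50)

# Week-4 effect from the text: OR 0.8 per 50 µm (20 percent lower odds).
print(round(scaled_odds_ratio(0.8, 150), 3))  # → 0.512, i.e., ~49% lower odds
```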
This proof-of-concept model demonstrated that reasonable predictions can be made on small but clean datasets. Certainly, the accuracy of this algorithm could be improved using additional datasets and building on its framework. ML models have the potential to assist us when we’re discussing prognosis with patients with CRVO. RS
1. Modi Y, Mehta N, Du W, et al. Predicting outcomes and treatment frequency following monthly intravitreal aflibercept for macular edema secondary to central retinal vein occlusion: A machine learning model approach. Paper presented at the American Society of Retina Specialists 40th annual scientific meeting; New York, NY; July 16, 2022.