Concerns Raised Over AI Imaging Devices in Ophthalmology

Thu 19th Jun, 2025

Recent research has highlighted significant evidence gaps concerning artificial intelligence (AI) eye imaging devices that have received regulatory approval for patient care. A review conducted by experts from University College London (UCL) and Moorfields Eye Hospital examined 36 AI systems approved as medical devices in Europe, Australia, and the United States, revealing concerning disparities in data transparency and clinical performance.

The findings, published in the journal npj Digital Medicine, indicate that nearly 19% of the reviewed devices lack any peer-reviewed data on their accuracy or clinical outcomes. Among the remaining devices, a mere 52% of studies provided information on patient age, 51% on sex, and only 21% on ethnicity. This lack of demographic reporting raises questions about the generalizability of these AI tools across diverse populations.

Furthermore, the majority of validations relied on historical image datasets, which often displayed limited diversity and inadequate reporting of essential demographic characteristics. Only 8% of studies compared AI tools against each other, and just 22% assessed them against standard human care practices. Alarmingly, only 11 out of 131 studies were interventional, meaning they tested devices in real-world clinical scenarios that could directly influence patient care.

Most AI tools evaluated in the study focus on diabetic retinopathy screening, with many overlooking other critical eye conditions. The analysis also noted a significant imbalance in regulatory approvals, with 97% of devices having received clearance in the European Union, compared to only 22% in Australia and a mere 8% in the United States. This inconsistency suggests that devices authorized in one region may not adhere to the same standards in others.

The researchers emphasize the need for more robust and transparent evidence to support the efficacy of these AI tools. They advocate for adherence to the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) to mitigate biases stemming from a lack of transparency. The lead author of the study underscored the potential of AI to bridge the gap in eye care, especially in areas with a shortage of specialists. However, this potential can only be realized if the AI systems are developed on a solid evidence-based foundation.

To enhance user confidence and facilitate clinical integration, the authors propose that manufacturers provide clearer data on their AI tools, including detailed reporting of model development and testing results. They also highlight the importance of regulatory frameworks, such as the EU AI Act, which could raise standards for data diversity and real-world testing.

In conclusion, the study aims to guide policymakers and industry leaders in ensuring that AI applications in eye care are effective and equitable. By implementing robust oversight, the promise of faster and more accurate identification of eye diseases can be fulfilled, ensuring no patient population is left behind.
