BIO: Dr. Sridhar is an associate professor of clinical ophthalmology at Bascom Palmer Eye Institute, Miami.
DISCLOSURE: Dr. Sridhar is a consultant to Alcon, DORC, Genentech/Roche and
One of the exciting but daunting technological innovations entering the social media landscape at an exponential pace is artificial intelligence. AI has tremendous potential, but it will also demand greater scrutiny from the consumers who use and interpret its output. Moreover, AI-generated images and content may not be accurate, forcing us to challenge the old aphorism, “I know it when I see it.”
One major social media issue has been so-called “deepfakes”: images and/or videos that have been digitally altered in a way that makes it difficult to detect whether and how they were modified. Artificial intelligence has made deepfakes far easier to generate, thanks to deep learning algorithms that can create false images outright or merge two existing images into a new one. One famous recent example in the pop culture arena was the deepfake controversy in January 2024 surrounding singer Taylor Swift. False images depicting Swift in realistic sexual and violent scenarios were released on social media and viewed tens of millions of times.
While this unfortunate episode raised awareness of the negative possibilities of deepfakes, we still lack good, widely available software to detect and flag them. The Massachusetts Institute of Technology has created a website, Detect Fakes, that lets users practice spotting deepfakes.1 The team offers several pearls for identifying fakes, including looking for excessively smooth or wrinkly facial skin, abnormal shadowing of the face, abnormally high or low glare off glasses, and lip movements in videos that are out of sync with the audio.
Much like in the rest of the science world, AI-generated deepfakes may be a double-edged sword for retina providers. One study2 described using a deep learning model to generate synthetic images of retinopathy of prematurity and then testing trained ROP experts to see if they could distinguish real from synthetic images. The results showed that the experts couldn’t reliably detect the deepfake pictures.
The authors appropriately noted both the positive and negative implications: On the one hand, the ability to generate large numbers of synthetic images could facilitate the training of ophthalmologists and of future AI neural networks. On the other hand, there’s now unfortunate potential for academic dishonesty.
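For readers curious what the detection side of this arms race might look like in practice, below is a minimal sketch in Python (PyTorch) of a real-versus-synthetic image classifier. To be clear, this is not the cited study’s model: the directory layout, hyperparameters and the choice of a pretrained ResNet-18 backbone are all illustrative assumptions.

```python
# Minimal sketch of a real-vs-synthetic fundus image classifier.
# Illustrative only -- not the model from the cited ROP study.
# The "data/train/real" and "data/train/synthetic" folders are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing applied to each photo.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder assigns one class per subfolder: real vs. synthetic.
train_ds = datasets.ImageFolder("data/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Fine-tune a small pretrained CNN as a two-class detector.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count chosen arbitrarily for the sketch
    for images, labels in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

The sober caveat, of course, is that such a detector is only as good as the generators it was trained against; newer synthesis methods can evade it, which is exactly why the unmet need described below remains open.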
Given the allure of publication and presentation (not to mention the financial incentives that can accompany elevating one industry product over another), we should be prepared to view every fundus photo and surgical video posted online, shown at a meeting or featured on the cover of our major journals with a discerning, skeptical eye to make sure it hasn’t been falsified. This illustrates an unmet need: accurate, easy-to-use software that can detect modified retinal images and videos. Hopefully one of the brilliant readers of this issue of Retina Specialist is on the case! RS
REFERENCES
1. Detect Fakes. MIT Media Lab. https://www.media.mit.edu/projects/detect-fakes/overview/
2. https://pubmed.ncbi.nlm.nih.gov/36246951/