A few years before she died, my mother began to lose her vision to macular degeneration. As her central vision blurred, she had to give up her driver's license. At first she could read with ever-larger magnifiers, but later she couldn't do that, either. Eventually, even recognizing faces was a trial.

Age-related macular degeneration, known as AMD, is a leading cause of vision loss and blindness for Americans over 50. There's no cure. The image above gives you an idea of what a scene - two small boys holding balls - might look like to someone with the disease.

'People (with AMD) are suffering. Everywhere you look, there's a blurry spot in the center,' said Dr. Aaron Lee, a University of Washington ophthalmologist and assistant professor who specializes in problems of the retina.

Ideal for AI

Lee believes AI can help - not just people with AMD, but those with other eye diseases that cause vision loss.

GPU-accelerated deep learning may be able to detect signs of disease that doctors miss, or speed diagnosis so doctors can start treatments sooner, he said. He's already developed deep learning algorithms that spot AMD and macular edema, a condition that damages central vision.

Of all medical fields, ophthalmology is among the best suited to benefit from GPU-accelerated deep learning, Lee said. Not only do ophthalmologists collect the massive amounts of data needed to train a neural network, but that data is highly standardized across the field.

'Something Amazing'

Lee and his team focus on a test called optical coherence tomography (OCT), which uses light waves to take cross-section pictures of the retina. Doctors perform more than five million OCT tests a year to diagnose conditions such as AMD, glaucoma and diabetic retinopathy. In diabetics, high blood sugar levels can damage the blood vessels in the retina and affect sight.

To create the AMD-detecting algorithm, researchers linked 100,000 patients' OCT scans to their electronic health records. Using the CUDA parallel computing platform and our GeForce GTX TITAN X GPUs with the cuDNN-accelerated Caffe deep learning framework and its Python interface, they trained a neural network that identified patients with AMD with 93 percent accuracy.
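As a rough illustration of the workflow, a GPU training run like this can be driven from Python in just a few lines. The sketch below uses Caffe's pycaffe interface; the prototxt and model file names, the solver setup and the output blob name are placeholder assumptions for illustration, not the UW team's actual configuration.

    # Minimal sketch: GPU-accelerated training with Caffe's Python
    # interface (pycaffe). File names and the network/solver definitions
    # are illustrative assumptions.
    import caffe

    caffe.set_device(0)   # select the first CUDA-capable GPU
    caffe.set_mode_gpu()  # run all layers on the GPU via CUDA/cuDNN

    # The solver prototxt names the network and the training data
    # (e.g., a database of OCT scans labeled from linked health records).
    solver = caffe.SGDSolver('amd_solver.prototxt')
    solver.solve()                         # run the full training schedule
    solver.net.save('amd_oct.caffemodel')  # persist the learned weights

    # Inference on a new scan with the trained weights; assumes the
    # deploy net's output blob is named 'prob'.
    net = caffe.Net('amd_deploy.prototxt', 'amd_oct.caffemodel', caffe.TEST)
    probs = net.forward()['prob']          # per-class probabilities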

That AMD algorithm, completed in only three weeks, dispelled Lee's skepticism about the advantages of GPU-accelerated deep learning.

'I saw there was something amazing going on here,' he said. 'It would have been impossible using regular computer architecture to process a dataset of that size and train a neural network as large as the one that we used.'

AI Matches the Experts

Delighted with those results, Lee added computing power with eight NVIDIA Tesla P100 GPUs to tackle the difficult challenge of identifying intraretinal fluid (IRF) in OCT scans. IRF, which can steal sight, occurs when blood vessels in the retina are damaged. Doctors monitor IRF to determine how well patients are responding to medication and whether they're improving.

The team trained a neural network to identify IRF at a pixel-by-pixel level - currently a manual process that relies on doctors' judgment. Their algorithm performed as well as experts and would give doctors a way to objectively track how much patients improve over time.
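To make that kind of objective tracking concrete, here is a small sketch in Python. It assumes binary per-pixel masks (1 = intraretinal fluid) and uses the Dice coefficient, a common overlap score for comparing a model's mask with an expert's annotation; the function names and the pixel-pitch value are illustrative, not taken from the team's paper.

    import numpy as np

    def dice_score(pred, truth):
        # Dice overlap between two binary IRF masks: 2*|A & B| / (|A| + |B|)
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

    def fluid_area_mm2(mask, mm_per_pixel):
        # Fluid area implied by a mask, given the scanner's pixel pitch.
        return mask.astype(bool).sum() * mm_per_pixel ** 2

    # Toy example: a model's mask versus an expert's annotation.
    expert = np.zeros((8, 8), dtype=np.uint8); expert[2:5, 2:5] = 1
    model = np.zeros((8, 8), dtype=np.uint8); model[2:5, 2:6] = 1
    print(f"Dice: {dice_score(model, expert):.2f}")            # 0.86
    print(f"Area: {fluid_area_mm2(model, 0.01):.4f} mm^2")     # 0.0012

Tracking the same patient's fluid area across visits would then give an objective measure of treatment response, the kind of readout the team describes.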

'We're on the precipice of using deep learning to show us features in images that we as doctors were blind to,' Lee said.

Sights Set on AI

But Lee sees much more opportunity for AI to transform ophthalmology.

He expects it to detect eye disease faster and more efficiently so doctors can spend more time treating patients. It could help address a growing shortage of doctors available to treat an aging population or provide care to people in regions where doctors are scarce. And it could lead to new insights into the causes of AMD and other diseases.

'AI is going to play a big role in how patients are treated in the future,' Lee said.

For more information, read Lee's papers on his research.

The researchers have open-sourced their work on GitHub at https://github.com/uw-biomedical-ml/oir and https://github.com/uw-biomedical-ml/irf-segmenter.


Original document: https://blogs.nvidia.com/blog/2017/12/18/sight-for-sore-ais-how-deep-learning-detects-eye-disease/
