Abdominal ultrasound tests for organ abnormalities haven't changed much in the past decade: a doctor moves a wand over the patient's abdomen and interprets blurry images. But deep learning work by U.S. researchers could accelerate the process by a thousand times while improving accuracy.

Tests typically take around half an hour. That may not seem long, but hospitals perform thousands of these scans every year, so they end up spending enormous amounts of time on the procedure when they could be seeing more patients.

These examinations, done to diagnose abnormalities in internal organs such as the kidneys, liver, and gallbladder, require substantial effort from practitioners, who must find the right angle for ultrasound imaging, annotate the views in text, and record relevant measurements.

Researchers from Siemens and Vanderbilt University are working to automate these tasks with the help of deep learning. They've used NVIDIA TITAN X GPUs and the cuDNN-accelerated PyTorch deep learning framework to develop the first system that can simultaneously classify organs and detect abnormalities.

With their model, the entire process is also much faster: patients may never need to sit through an abdominal ultrasound that lasts longer than a few seconds. In the time it takes a hospital to perform one ultrasound today, the team's system could perform nearly 30.

'My goal is to develop a number of robust and efficient medical image analysis algorithms to understand the large-scale medical image data,' said Yuankai Huo, lead researcher on the study and research assistant professor of electrical engineering and computer science at Vanderbilt University.

Previous attempts at automating medical imaging have deployed multiple networks, one each for classification and landmark detection. Those efforts have proven impractical because most ultrasound scanners have limited computational and memory resources.

To overcome those limits, the research team's new deep learning-based system handles all of the tasks through a single network, increasing efficiency and practicality. They trained the system on more than 187,000 images from 706 patients, a workload made practical by the speed of NVIDIA TITAN X GPUs.
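The article doesn't detail the team's network, but the core idea of one shared model serving both tasks can be sketched in PyTorch, the framework they used. Everything below is an illustrative assumption rather than the authors' design: the layer sizes, class count, landmark count, and all names are hypothetical.

```python
# Minimal sketch of a single multi-task network: one shared convolutional
# backbone feeds both a classification head and a landmark-detection head,
# so only one model has to fit in the scanner's limited memory.
# All architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn


class MultiTaskUltrasoundNet(nn.Module):
    def __init__(self, num_classes: int = 11, num_landmarks: int = 4):
        super().__init__()
        # Shared feature extractor reused by both tasks.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        feat_dim = 64 * 8 * 8
        # Head 1: classify the organ/view (and abnormal vs. normal).
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Head 2: regress (x, y) coordinates for a few anatomical landmarks.
        self.landmarks = nn.Linear(feat_dim, num_landmarks * 2)

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return self.classifier(feats), self.landmarks(feats)


# Joint training step: one forward pass serves both tasks, and the two
# losses are simply summed (equal weighting is an assumption).
device = "cuda" if torch.cuda.is_available() else "cpu"  # e.g. a TITAN X
model = MultiTaskUltrasoundNet().to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
cls_loss, lmk_loss = nn.CrossEntropyLoss(), nn.MSELoss()


def train_step(images, labels, landmark_targets):
    logits, coords = model(images.to(device))
    loss = cls_loss(logits, labels.to(device)) + lmk_loss(coords, landmark_targets.to(device))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Sharing one backbone is what makes the single-network approach attractive on a scanner: the expensive convolutional features are computed once per frame, and only the small task-specific heads differ.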

In an experimental study in which the system was asked to classify and detect organs in patients' scans, the team's work paid off: their system not only outperformed previous neural networks but also beat a human expert at correctly diagnosing abnormalities.

'The improvements in computational power enabled by NVIDIA GPUs are helping us to achieve scientific goals that were impossible to target previously,' Huo said. 'I feel the entire medical image analysis community has been reshaped by the advancement of computational capability.'

Thanks to advances in the medical field and deep learning technology, patients may never have to sit through lengthy ultrasounds again, and doctors will have more time to consult with patients and develop appropriate treatment plans.
