Google AI algorithm may improve chest x-ray interpretation, radiologist efficiency

New research from Google, presented at MIT Technology Review's EmTech Digital 2018 conference in San Francisco on March 27, may point to a way of reducing the number of radiologist-annotated images required to train a deep learning algorithm for medical imaging applications.

Featured speaker Jia Li, head of research and development at Google Cloud, discussed how the newly developed deep learning algorithm may help radiologists more efficiently interpret chest x-rays by simultaneously identifying and localizing diseases, according to an article by MIT Technology Review.

"[Deep learning] outperforms the state-of-the-art machine-learning technology for disease prediction, and more importantly, it generates insights about the decision that has been made to assist better interpretability of the result," Li told the EmTech audience. 

Based on their research findings, Li and her colleagues believe machine learning may help identify diseases in clinical settings where data are limited and where doctors need the reasoning behind a diagnosis.

"We propose a new approach by combining the overall disease types and the local pixel-wise indication of disease without additional detailed labeling effort," Li and colleagues explained in their study, published on arXiv.org. "The resulting solution generates overall prediction of the disease type as well as the abnormal area of the disease." 

The chicken and the egg  

But because the researchers had only a small training dataset with which to feed the system, another dataset was used to establish the deep learning process and ensure that the area of an image critical to diagnosing an abnormality was clearly identified. This presents a "chicken and the egg" problem, Li said.

"On one hand, radiologists are spending a large amount of time analyzing the radiology images, as we want to build AI-powered tool to assist their diagnosis. But, in order to do so, traditionally we'll need a large amount of data," Li explained during the EmTech conference. "That goes back to the exact problem that we want to solve and put back the burden to our radiologists again to label large amounts of data."  

The research  

Li and colleagues utilized the U.S. National Institutes of Health (NIH) ChestX-ray8 database, which openly contains more than 110,000 chest x-ray studies associated with up to 14 different types of diseases. Li also explained at the conference that this dataset includes 880 images with 984 annotated "bounding boxes" enclosing eight different types of diseases.

Researchers used the 880 annotated images and another 111,240 images without annotations (labeled as having a disease but without any information localizing the disease on the image) to train the deep learning model.
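To make this mixed supervision concrete, here is a minimal, hypothetical sketch of how the two kinds of training examples could be represented in Python. The class name and fields are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical container for one training example. Only 880 of the
# ~112,000 images carry bounding boxes; the rest have image-level
# disease labels alone.
@dataclass
class ChestXrayExample:
    image_path: str
    disease_labels: List[int]  # one 0/1 flag per disease class (14 here)
    # Each box is ((x, y, width, height), disease_index); None if unannotated.
    boxes: Optional[List[Tuple[Tuple[float, float, float, float], int]]] = None

labels = [1 if i == 3 else 0 for i in range(14)]  # positive for disease #3
annotated = ChestXrayExample("img_00001.png", labels,
                             boxes=[((120.0, 80.0, 60.0, 60.0), 3)])
unannotated = ChestXrayExample("img_00002.png", labels)  # label only, no box
```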

Li and colleagues applied a convolutional neural network to classify an image, identifying the type of disease present and localizing it; the localization information was obtained by slicing the image into a grid of "patches." In turn, the researchers were able to combine disease identification and localization in the same prediction model.
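The study does not ship reference code, but a rough PyTorch sketch of the patch-grid idea might look like the following. The tiny stand-in backbone and all layer sizes are assumptions for illustration (the published model builds on a much deeper network); the point is that a 1x1 convolution over the final feature grid yields one score per patch per disease, so classification and localization come out of the same forward pass.

```python
import torch
import torch.nn as nn

class PatchGridClassifier(nn.Module):
    """Sketch: treat the final feature map as a grid of patches,
    each scored for every disease class. Sizes are illustrative."""

    def __init__(self, num_diseases: int = 14, grid: int = 16):
        super().__init__()
        # Stand-in backbone; the paper builds on a ResNet-style network.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),        # -> (B, 64, grid, grid)
        )
        # 1x1 conv gives one score per patch per disease class.
        self.patch_scores = nn.Conv2d(64, num_diseases, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # Per-patch disease probabilities: (B, num_diseases, grid, grid)
        return torch.sigmoid(self.patch_scores(feats))

model = PatchGridClassifier()
patch_probs = model(torch.randn(2, 1, 512, 512))  # two grayscale x-rays
print(patch_probs.shape)                          # torch.Size([2, 14, 16, 16])
```

Thresholding or ranking the patch map then gives the "abnormal area" the quote above refers to, without a separate detection model.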

"For an image with bounding box annotation, the learning task becomes a fully supervised problem since the disease label for each patch can be determined by the overlap between the patch and the bounding box," Li et al. wrote. "For an image with only a disease label, the task is formulated as a multiple instance learning problem - at least one patch in the image belongs to that disease." 

By comparing the performance of their prediction model on the NIH dataset with the NIH's published ResNet-50 deep learning baseline, researchers found that their model achieved a higher area under the curve (AUC) than the NIH algorithm in detecting 14 abnormalities in chest x-rays.
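The AUC referred to here is the standard per-disease area under the ROC curve. As an illustration only (random placeholder data, not the study's results), such a comparison could be computed with scikit-learn as follows:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 14))  # ground-truth labels, 14 diseases
y_score = rng.random(size=(1000, 14))         # model probabilities (placeholder)

# One AUC per disease class; comparing models means comparing these columns.
per_disease_auc = [roc_auc_score(y_true[:, d], y_score[:, d]) for d in range(14)]
print([round(a, 3) for a in per_disease_auc])
```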

AI and the need for radiologists 

According to MIT Technology Review, Li explained that artificial intelligence (AI) can automate only a small portion of a radiologist's work. Understanding a patient's specific case history, communicating a diagnosis, and determining correct treatment options are all parts of the job that remain beyond what current machine learning systems can do.

Li also explained that AI will not replace doctors in the near future. Instead, it can assist doctors in decision-making and improve efficiency in interpretation. Her team's research suggests that medicine may be a major focus for Google's cloud-based deep learning platform and highlights the challenges of applying AI to real-world medical situations, according to MIT Technology Review.

"Our quantitative results show that the proposed model achieves significant accuracy improvement over the published state-of-the-art on both disease identification and localization, despite the limited number of bounding box annotations of a very small subset of the data," the authors wrote. "In addition, our qualitative results reveal a strong correspondence between the radiologist's annotations and detected disease regions, which might produce further interpretation and insights of the diseases."

""

A recent graduate from Dominican University (IL) with a bachelor’s in journalism, Melissa joined TriMed’s Chicago team in 2017 covering all aspects of health imaging. She’s a fan of singing and playing guitar, elephants, a good cup of tea, and her golden retriever Cooper.
