Deep learning plus radiologist oversight boosts efficiency of liver lesion segmentation

An AI system for automatically detecting and segmenting colorectal metastases in the liver can improve interpretive efficiency when its output is manually corrected by radiologists, according to a study published online March 13 in Radiology: Artificial Intelligence.

Senior study author An Tang, MD, and colleagues at the University of Montreal arrived at their findings after evaluating an automated system built on a fully convolutional network. They trained the system on contrast-enhanced CT images of 261 lesions, then validated it on 22 lesions and tested it on 105.
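For readers unfamiliar with the underlying technique, the sketch below shows what a fully convolutional segmentation network looks like in code. It is a minimal illustrative example in PyTorch; the architecture, layer sizes, input shapes and training step are assumptions for demonstration only and are not the model Tang and colleagues actually built.

```python
# Minimal illustrative sketch of a fully convolutional network (FCN) for
# binary lesion segmentation on CT slices. Architecture and training step
# are hypothetical; they are NOT the authors' model.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, kernel_size=1),       # per-pixel lesion logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One hypothetical training step on stand-in data shaped like CT slices.
model = TinyFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(4, 1, 128, 128)                    # fake CT slices
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()   # fake lesion masks

logits = model(images)
loss = loss_fn(logits, masks)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```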

Using manual detection and segmentation by a fellowship-trained abdominal radiologist with eight years of experience as the reference standard, the team found that per-lesion sensitivity for lesions smaller than 10 millimeters was very low with automated segmentation (0.10) but higher with user-corrected segmentation (0.30 to 0.57) and manual segmentation (0.58 to 0.70).

Assessing efficiency by comparing user interaction time for manual segmentation with that for user-corrected segmentation, they found mean interaction time was 7.7 ± 2.4 minutes per case for manual segmentation versus 4.8 ± 2.1 minutes per case for user-corrected segmentation. Fully automated segmentation ran in about 1 second.
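For rough context, the reported means work out to a saving of just under 3 minutes per case; the percentage below is our own back-of-the-envelope arithmetic on those two figures, not a number from the study.

```python
# Rough arithmetic on the reported mean interaction times per case.
manual_min = 7.7        # mean minutes per case, manual segmentation
corrected_min = 4.8     # mean minutes per case, user-corrected segmentation

saved_min = manual_min - corrected_min
saved_pct = 100 * saved_min / manual_min
print(f"User correction saved about {saved_min:.1f} min/case ({saved_pct:.0f}%).")
# -> User correction saved about 2.9 min/case (38%).
```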

User interaction time was significantly shorter for both user-corrected and fully automated segmentation than for manual segmentation.

Tang et al. concluded that the deep learning method they studied “shows promise for facilitating detection and segmentation of colorectal cancer liver metastases, thus augmenting the work of radiologists while improving efficiency and providing similar variability.”

User correction of automated segmentations “can generally resolve deficiencies of fully automated segmentation for small metastases and is faster than manual segmentation,” they added.

Among their study's limitations, the authors acknowledged the use of manual segmentation as the reference standard, since it can vary from reader to reader.

However, they noted, their volume and segmentation data drew from seven separate sites and combined interpretations from multiple radiologists, so the findings should generalize across segmentations performed by different readers on different volumes.

Importantly, when it came to estimating lesion volumes, the human-machine combo performed no better in the study than humans alone.

Still, tumor segmentation “is a tedious and time-consuming task susceptible to intra- and interoperator variability if performed manually by a human,” Tang and colleagues pointed out. “Hence, there is an emerging opportunity for automated tumor segmentation to address [many] shortcomings.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
