AI identifies cancerous regions on OCT breast tissue images

A deep learning classification approach can identify cancerous regions from benign areas in optical coherence tomography (OCT) images of breast tissue, according to results of a July 17 study published in Academic Radiology.

The convolutional neural network (CNN) classified four types of breast tissue and, when its outputs were grouped into a binary cancerous-versus-non-cancerous decision, achieved an overall accuracy of 94%, reported Diana Mojahed, MS, of Columbia University’s Department of Biomedical Engineering in New York, and colleagues. Importantly, the automated method can overcome the high interobserver variability that typically plagues OCT image interpretation.

“These results further validated the practical feasibility to use OCT as a real-time intraoperative margin assessment tool in breast-conserving surgery,” Mojahed et al. wrote. “Although clinicians can be trained to read OCT images, there remain practical concerns of high interobserver variability and slow speed, which make manual interpretation impractical for the intraoperative setting, therefore indicating a need for automated techniques to solve these problems.”

OCT is the “optical equivalent” of ultrasound, according to the researchers. It relies on low-energy near-infrared light and is widely used in ophthalmology. Though it has shown promise in breast imaging, manual interpretation is hampered by slow reading speed, the time required to train readers, high interobserver variability and the inherent complexity of the images, the authors noted.

With this in mind, the team created a custom ultrahigh-resolution OCT (UHR-OCT) system and trained an 11-layer CNN to classify tissue as adipose, stroma, ductal carcinoma in situ (DCIS) or invasive ductal carcinoma (IDC). A total of 46 tissue specimens from 23 patients were used. Of that total, 17 were normal tissue and 29 were cancerous.
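The study describes an A-line-based scheme, meaning the classifier operates on individual depth profiles of the OCT signal. The paper's actual 11-layer architecture and trained weights are not reproduced here; the toy model below (one 1-D convolution, ReLU, global average pooling and a linear layer, with random weights) is only a hedged sketch of what such a pipeline's shape might look like, and the class labels are taken from the article.

```python
import random

# Hypothetical sketch of A-line classification. All weights are random and
# illustrative -- the output is arbitrary, not a real tissue prediction.
CLASSES = ["adipose", "stroma", "DCIS", "IDC"]

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a depth profile."""
    k = len(kernel)
    return [sum(s * w for s, w in zip(signal[i:i + k], kernel))
            for i in range(len(signal) - k + 1)]

def classify_aline(a_line, kernel, fc_weights, fc_bias):
    """Toy classifier: conv -> ReLU -> global average pool -> 4 class scores."""
    feat = [max(x, 0.0) for x in conv1d(a_line, kernel)]      # conv + ReLU
    pooled = sum(feat) / len(feat)                            # global average pool
    logits = [w * pooled + b for w, b in zip(fc_weights, fc_bias)]
    return CLASSES[max(range(len(CLASSES)), key=lambda i: logits[i])]

# Random inputs and weights purely to exercise the pipeline end to end.
rng = random.Random(0)
kernel = [rng.gauss(0, 1) for _ in range(16)]
fc_w = [rng.gauss(0, 1) for _ in range(4)]
fc_b = [rng.gauss(0, 1) for _ in range(4)]
a_line = [rng.gauss(0, 1) for _ in range(512)]
print(classify_aline(a_line, kernel, fc_w, fc_b))
```

Because each A-line is scored independently, a scheme like this can run as the scan is acquired, which is what makes the approach plausible for real-time intraoperative use.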

Overall, the algorithm achieved 94% accuracy, 96% sensitivity and 92% specificity in the binary task of distinguishing cancerous from non-cancerous tissue. It also outperformed the combined 88% accuracy of seven clinician readers, including radiologists, pathologists and surgeons, the authors noted.
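For readers unfamiliar with these metrics, the sketch below shows how accuracy, sensitivity and specificity are derived from a binary confusion matrix. The counts used are made up to reproduce the same headline numbers; they are not the study's data.

```python
# Illustrative only: how binary cancer-vs-non-cancer metrics are computed
# from true/false positive and negative counts.
def binary_metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: cancer correctly flagged
    specificity = tn / (tn + fp)   # true negative rate: benign correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts: 96/100 cancerous and 92/100 benign samples correct.
acc, sens, spec = binary_metrics(tp=96, fn=4, tn=92, fp=8)
print(acc, sens, spec)  # -> 0.94 0.96 0.92
```

High sensitivity matters most in a margin-assessment setting, since a missed cancerous region (a false negative) is the costlier error.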

Following five-fold cross-validation, the mean F1 score was highest for invasive ductal carcinoma and adipose tissue, followed by stroma and ductal carcinoma in situ.
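The per-class F1 score balances precision (how many flagged regions were truly that class) against recall (how many regions of that class were caught). A minimal sketch, with made-up counts rather than the paper's results:

```python
# Illustrative F1 computation; tp/fp/fn counts are hypothetical.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# E.g. 90 correct detections, 10 false alarms, 10 misses for one class.
print(round(f1_score(tp=90, fp=10, fn=10), 3))  # -> 0.9
```

In five-fold cross-validation, this score would be computed on each held-out fifth of the specimens and then averaged, which is what a "mean F1 score" per class refers to.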

“Our study demonstrates the feasibility of using Convolutional Neural Network (CNN) algorithms to classify cancer in OCT images of breast tissue and this study presents a unique A-line based classification scheme that can be used in real-time applications and extended beyond breast imaging to other applications,” the authors wrote.


Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
