Deep learning can identify cancerous and precancerous esophagus tissue on digitized pathology slides, opening the door for AI to change the digital pathology landscape.
The most common deep learning approach for classifying microscopy images, known as the “sliding window model,” requires pathologists to annotate regions of interest on whole slides to train the image classifier. Producing those annotations, however, is difficult even for the best pathologists, explained Jason Wei, with Dartmouth College’s school of medicine, and colleagues.
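To make the sliding-window idea concrete, here is a minimal sketch of how such a model scans a whole-slide image: fixed-size patches are cut out at a regular stride, each is scored by a patch-level classifier, and the patch scores are aggregated into a slide-level call. The patch size, stride, threshold, and the toy brightness-based “classifier” below are illustrative assumptions, not details from the study.

```python
import numpy as np

def sliding_window_patches(slide, patch_size=224, stride=224):
    """Yield (row, col, patch) tuples covering a slide image array.

    `slide` is an H x W x 3 array; 224-pixel patches and stride are
    common illustrative choices, not the study's actual settings.
    """
    h, w = slide.shape[:2]
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            yield r, c, slide[r:r + patch_size, c:c + patch_size]

def classify_slide(slide, patch_classifier, threshold=0.5):
    """Call a slide positive if any patch score exceeds the threshold --
    one simple way to aggregate patch-level predictions."""
    scores = [patch_classifier(p) for _, _, p in sliding_window_patches(slide)]
    return max(scores) > threshold, scores

# Toy stand-in for a trained CNN: flags bright patches.
toy_classifier = lambda patch: float(patch.mean() > 128)

slide = np.zeros((448, 448, 3), dtype=np.uint8)
slide[224:, 224:] = 255  # one "suspicious" bright quadrant
label, scores = classify_slide(slide, toy_classifier)
```

The key point the researchers criticize is visible here: training `patch_classifier` requires patch-level (region-of-interest) labels, which is where the expensive pathologist annotation comes in.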
The researchers described their new approach, which mimics the way pathologists examine slides under the microscope, to detect esophageal adenocarcinoma (EAC) and Barrett esophagus (BE). Patients with the latter face up to a 125-fold higher risk of cancer.
“To our knowledge, the model is the first to automate the detection of BE and EAC on histopathological slides using a deep learning approach,” researchers wrote Nov. 7 in JAMA Network Open.
“This new model is expected to open avenues for applying deep learning to digital pathology.”
To train their model, the team collected 379 deidentified, high-resolution histological images from patients who underwent endoscopic esophagus and gastroesophageal junction mucosal biopsy between January 2016 and December 2018.
When evaluated on an independent set of 123 digital slides, the deep learning model performed “as well or better” than the current gold standard: the sliding-window approach.
“Previous methods for analyzing microscopy images were limited by bounding box annotations and unscalable heuristics,” the researchers concluded. “The model presented here was trained end to end with labels only at the tissue level, thus removing the need for high-cost data annotation and creating new opportunities for applying deep learning in digital pathology.”
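Training with labels only at the tissue level is commonly done with an attention-style aggregation step: every patch gets a learned weight, the weighted average of patch features is classified once per slide, and gradients flow end to end from the slide label. The sketch below illustrates that aggregation in numpy; the weight vectors `w_att` and `w_cls` stand in for learned parameters and are hypothetical, not the model's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_features, w_att, w_cls):
    """Aggregate per-patch features into one slide-level prediction.

    Each patch receives an attention weight; the weighted feature
    average feeds a single classifier, so training needs only a
    slide-level (tissue-level) label, not patch annotations.
    """
    att = softmax(patch_features @ w_att)   # one weight per patch
    slide_vec = att @ patch_features        # weighted average feature
    logit = slide_vec @ w_cls
    return 1.0 / (1.0 + np.exp(-logit)), att

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))            # 16 patches, 8-dim features
prob, att = attention_pool(feats, rng.normal(size=8), rng.normal(size=8))
```

The attention weights also offer a side benefit often cited for such models: they indicate which patches drove the slide-level call, loosely mirroring where a pathologist would look.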