Advanced visualization tool development aids whole-breast ultrasound efforts


CHICAGO—Whole-breast ultrasound offers the potential for CT-like clarity in breast imaging without the dose burden of ionizing radiation. But like CT, these exams generate a large volume of images, creating a throughput challenge for interpreting clinicians. A team of Japanese researchers, experienced with the modality, presented a set of advanced visualization tools designed to assist with the deployment of whole-breast ultrasound in clinical practice.

A team of scientists from Chunichi Hospital and the Nagoya Medical Center in Nagoya, Dokkyo Medical University in Mibu, and Gifu University in Gifu showcased their work in a pair of poster presentations at the 94th annual meeting of the Radiological Society of North America (RSNA).

In earlier work, delivered at the Society of Photo-Optical Instrumentation Engineers (SPIE) meeting last year, the developers noted that fatigue from interpreting a large volume of breast ultrasound images can lead to masses being overlooked. They therefore set their sights on developing computer-aided detection (CAD) tools to assist radiologists in detecting breast masses.

The group presented a mammary gland analysis method that automatically classifies whole-breast ultrasound images into three categories of mammary gland pattern: mottled pattern (MP), intermediate pattern (IP), and atrophic pattern (AP).

“Classification of breast ultrasound images based on mammary gland pattern may be helpful to radiologists in the diagnosis of the whole breast ultrasound images,” they wrote.

The research team obtained experimental cases from 50 patients with a prototype whole-breast ultrasound scanner, the ASU-1004 from Aloka.

This scanner has a 6-cm linear transducer probe with a frequency range of 6 to 10 MHz, and the probe moves mechanically in a water-path system. A special ultrasound-transmitting membrane is stretched over the water, and the patient's breast is imaged while she lies prone on the membrane.

The probe automatically scans a 16 x 16-cm area in three overlapping runs; each overlapping area is 1 cm wide. The depth interval between two consecutive views is 0.125 to 2 mm. A view is generated by integrating the three images. In the prototype setup, a whole-breast ultrasound study consists of 160 to 171 views at 1-mm slice spacing.
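The article does not specify how the three overlapping runs are "integrated" into a view; one plausible sketch, assuming each run yields a 2-D strip placed at a known lateral offset, is to sum the strips and average wherever they overlap:

```python
import numpy as np

def compose_view(strips, offsets, total_width):
    """Integrate overlapping scan strips into one view by averaging
    in the overlap regions. This is an illustrative blending scheme;
    the scanner's actual integration method is not described."""
    depth = strips[0].shape[0]
    acc = np.zeros((depth, total_width), dtype=float)
    cnt = np.zeros((depth, total_width), dtype=float)
    for strip, x0 in zip(strips, offsets):
        w = strip.shape[1]
        acc[:, x0:x0 + w] += strip
        cnt[:, x0:x0 + w] += 1.0
    cnt[cnt == 0] = 1.0  # avoid divide-by-zero outside probe coverage
    return acc / cnt
```

For a 6-cm probe covering a 16-cm field, three runs with 1-cm overlaps map naturally onto this scheme; each overlap column ends up as the mean of the two contributing strips.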

An experienced radiologist classified the 50 cases into 12 MP, 24 IP, and 14 AP cases. A volume of interest (VOI) of fixed size was defined at the nipple position in each breast. The team reported an accuracy of 82 percent for its proposed classification method.

“With respect to individual patterns, MP, IP, and AP cases were 91.7 percent (11/12), 70.8 percent (17/24), and 92.9 percent (13/14), respectively,” they noted.
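The presentations do not detail the features behind these figures. As a toy illustration of the fixed-size-VOI idea only, one could crop a VOI at the nipple position and bucket it into the three patterns by a simple brightness-fraction feature; the feature and both thresholds below are invented for illustration, not the authors' method:

```python
import numpy as np

def classify_gland_pattern(volume, nipple_xy, voi_size=32,
                           lo=0.2, hi=0.5):
    """Toy sketch: crop a fixed-size VOI at the nipple position and
    label it atrophic (AP), intermediate (IP), or mottled (MP) by the
    fraction of bright (gland-like) voxels. Hypothetical feature and
    thresholds -- the paper does not describe its classifier."""
    x, y = nipple_xy
    h = voi_size // 2
    voi = volume[:, y - h:y + h, x - h:x + h]
    bright = float((voi > 0.5).mean())  # fraction of bright voxels
    if bright < lo:
        return "AP"
    if bright < hi:
        return "IP"
    return "MP"
```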

A second presentation from the developers described an automated recognition system for anatomical structures in whole-breast ultrasound images, which comprise skin, nipple, rib, fat, pectoralis muscle, and mammary gland.

“If we could extract these tissues from the images, the results could be applied to a CAD system and a computerized registration of other modality images,” they wrote.

Their preliminary work proposed methods for extracting skin and nipples from whole-breast ultrasound images.

The team used 20 whole-breast ultrasound studies acquired with the ASU-1004 as their dataset. Each study consisted of 166 image slices, each 614 pixels wide and 420 pixels high.

The research team’s methods for extraction of skin consisted of three steps:

1. Extraction of the maximum-volume region from the high-density regions segmented by grayscale thresholding;

2. Extraction of the skin surface using the position coordinates of each axial plane in the maximum-volume region; and

3. Extraction of skin using the skin surface.
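Assuming a grayscale volume indexed as (depth, y, x), the three steps above might be sketched as follows; the threshold value and the skin-band thickness are illustrative, not the authors' parameters:

```python
import numpy as np
from scipy import ndimage

def extract_skin(volume, thresh=0.5, band=3):
    """Sketch of the three skin-extraction steps: (1) grayscale
    threshold and keep the maximum-volume connected component,
    (2) take the shallowest voxel of that component in each axial
    column as the skin surface, (3) keep a thin band below the
    surface as skin. Illustrative parameters."""
    # Step 1: threshold, label components, keep the largest.
    mask = volume > thresh
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    biggest = labels == (np.argmax(sizes) + 1)
    # Step 2: skin surface = first bright voxel along depth per column.
    depth = volume.shape[0]
    surface = np.where(biggest.any(axis=0),
                       biggest.argmax(axis=0), depth)
    # Step 3: skin = a band of `band` voxels starting at the surface.
    z = np.arange(depth)[:, None, None]
    return (z >= surface) & (z < surface + band)
```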

The group developed two methods for extracting nipples. The first, for low-density nipples, consisted of three steps: generating an average image in the coronal plane; segmenting a region containing the nipple by grayscale thresholding of the average image; and extracting the nipple using the segmented region together with the skin region.
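A minimal sketch of these three steps, assuming the coronal average is obtained by averaging along the depth axis and that a skin mask (as from the skin-extraction step) is available; the darkness threshold is an invented value:

```python
import numpy as np

def extract_low_density_nipple(volume, skin_mask, thresh=0.3):
    """Sketch of the low-density nipple method: (1) average the
    volume along depth to form a coronal projection, (2) threshold
    the dark (low-density) candidate region, (3) keep only voxels
    that also lie in the skin region. Illustrative threshold."""
    coronal_avg = volume.mean(axis=0)      # average coronal image
    dark = coronal_avg < thresh            # candidate nipple area
    return skin_mask & dark[None, :, :]    # restrict to skin region
```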

The second method, for high-density nipples, consisted of two steps: applying an anisotropic diffusion filter to the original whole-breast ultrasound images, and then segmenting the nipples with a watershed algorithm. The team classified nipples into low- and high-density types.
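The two-step pipeline can be sketched with the standard Perona-Malik diffusion loop and SciPy's `watershed_ift` as a stand-in for the watershed algorithm; the authors' filter parameters and marker-seeding strategy are not given, so the values and seed points here are illustrative:

```python
import numpy as np
from scipy import ndimage

def anisotropic_diffusion(img, n_iter=10, kappa=0.15, gamma=0.2):
    """Standard Perona-Malik anisotropic diffusion (illustrative
    parameters; periodic borders via np.roll for brevity)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        # edge-stopping conduction coefficients
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def segment_nipple(img, seed_yx, bg_yx):
    """Smooth with diffusion, then flood a watershed from two
    hypothetical seed points over the gradient image."""
    smooth = anisotropic_diffusion(img)
    grad = ndimage.morphological_gradient(smooth, size=3)
    grad8 = np.uint8(255 * (grad - grad.min()) /
                     (np.ptp(grad) + 1e-9))
    markers = np.zeros(img.shape, dtype=np.int16)
    markers[seed_yx] = 1   # inside the bright nipple candidate
    markers[bg_yx] = 2     # background seed
    return ndimage.watershed_ift(grad8, markers)
```

The diffusion step preserves the strong edge around a high-density nipple while flattening speckle inside it, so the subsequent watershed floods cleanly up to that edge.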