Guess who? Identifying patients from surface-rendered images difficult

Three-dimensional reconstructed image of a 69-year-old man who underwent CT of the maxillofacial sinuses.
Source: Am J Roentgenol 2014;202:1267-1271

Identifying patients from surface-rendered facial images is possible but relatively difficult for observers, according to a study published in the June issue of the American Journal of Roentgenology.

Although 3D and multiplanar reconstruction of CT images have become common in diagnostic imaging, potential problems may arise regarding high-resolution reconstructions from isotropic or near-isotropic datasets because they can create detailed images of a patient’s face. “This capability raises privacy concerns because the images could be used to identify a patient despite de-identification or anonymization of the patient’s protected health information,” wrote the study’s lead author, Joseph Jen-Sho Chen, MD, of the University of Maryland School of Medicine in Baltimore, and colleagues.

As facial recognition technology evolves, there is a real concern that such tools could be used to intrusively or maliciously identify patients, in direct violation of the privacy and confidentiality rights secured by HIPAA. Chen and colleagues sought to assess whether volunteer viewers could recognize the faces on 3D reconstructed images as those of specific patients.

The study included 328 participants in total. Twenty-nine underwent clinically indicated CT of the maxillofacial sinuses or cerebral vasculature and were also photographed; this subset was labeled group A. Another 150 participants who volunteered to have their faces photographed made up group B. Surface-reconstructed 3D images of group A were generated from the CT data, and digital photographs of groups A and B were acquired, for a total of 179 photographs.

The images were reviewed by the remaining 149 participants, who completed a web-based questionnaire asking them to match each surface-reconstructed image generated from CT data with one of the 179 randomized digital photographs, or with none of them. The researchers then analyzed how accurately the observers matched the reconstructions with the facial photos.

The observers' overall accuracy was 61 percent. Sensitivity, the rate at which an observer correctly matched a reconstructed image with its corresponding photo, was 88 percent. Specificity, the rate at which an observer correctly chose none of the options when the reconstructed image matched no displayed photo, was 50 percent.
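As a rough illustration of how these metrics relate, the counts below are hypothetical (the study reports only percentages, not raw response tallies); they are chosen to yield the reported sensitivity and specificity, and they show that overall accuracy also depends on the mix of match and no-match trials:

```python
# Hypothetical response counts -- NOT the study's data.
# A "positive" trial is one where the reconstruction has a true matching photo.
true_pos = 88    # matchable reconstructions correctly paired with their photo
false_neg = 12   # matchable reconstructions missed or mispaired
true_neg = 50    # unmatchable reconstructions where "none" was correctly chosen
false_pos = 50   # unmatchable reconstructions wrongly paired with a photo

sensitivity = true_pos / (true_pos + false_neg)   # fraction of true matches found
specificity = true_neg / (true_neg + false_pos)   # fraction of non-matches rejected
accuracy = (true_pos + true_neg) / (
    true_pos + false_neg + true_neg + false_pos
)                                                 # fraction of all trials correct

print(f"sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} "
      f"accuracy={accuracy:.2f}")
```

With these counts, accuracy works out higher than the study's 61 percent; the study's lower figure reflects its own proportion of no-match trials, which these invented numbers do not reproduce.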

Though the association wasn’t strong, Chen et al observed that the accuracy of identifying the matches declined as the age of the image reviewers increased.

“Our data suggest that 3D-rendered CT images may be difficult for most people, medically trained or not, to match with known faces without simultaneously seeing that person or his or her photograph,” wrote the authors. “Our research findings could be interpreted as simultaneously supporting the suggestion that remarkably lifelike surface-reconstructed images can be successfully matched with patient photographs but also suggesting that this task of identification can be quite difficult without familiar cues such as hair, skin color and markings, and the differences in the patient's face when in the supine position for CT.”

The researchers call for future studies investigating whether sophisticated facial recognition software, which draws on more objective information, could outperform human observers at this identification task.