Participants in the Digital Mammography DREAM Challenge are doing their part toward the nationwide goal of completing 10 years of cancer research in half the time. The challenge is funded under the Cancer Moonshot’s Coding4Cancer initiative, pitting coding teams against one another in a friendly contest to see who can come up with the best way to improve mammogram readings.
Vice President Joe Biden’s Cancer Moonshot goal could mean a lot of fast advances in understanding cancer and how to treat it. But even the possibility of such an achievement requires a strategic effort to get as much work done as possible in a short amount of time, such as through the Digital Mammography challenge.
One of three DREAM Challenge directors, Justin Guinney, director of computational oncology and data science at Sage Bionetworks, called the process a “collaborative competition.”
“It is people competing, but competing in a way where the end goal is to improve the current standard of care or the current model benchmark or whatever we’re trying to advance. This is one way of engaging many people at once with the right incentives in place to get them to do this all together,” Guinney said.
Teams of coders, responding to online calls for sign-ups, started working through a platform called Synapse in late June on creating their own computer models for improving the accuracy with which signs of breast cancer can be detected through a mammogram.
According to the challenge’s lead radiologist and clinical advisor, Christoph Lee, MD, human radiologists miss up to 16 percent of cancers in mammograms. Getting a computer to fill in the gaps could both improve early detection (and, therefore, survival) in women with breast cancer and eliminate some of the unnecessary extra testing associated with false positives.
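The tradeoff Lee describes, missed cancers versus unnecessary follow-ups, is usually framed in standard screening metrics. As a rough illustration (all numbers below are hypothetical, not drawn from the challenge data), sensitivity and false-positive rate can be computed like this:

```python
# Hypothetical screening numbers for illustration only -- not from the challenge data.
# Out of 1,000 screened women, suppose 50 truly have cancer.
true_positives = 42    # cancers the reader (human or model) caught
false_negatives = 8    # cancers missed -- the gap a model could help close
false_positives = 95   # healthy women recalled for unnecessary follow-up
true_negatives = 855   # healthy women correctly cleared

# Sensitivity (recall): share of real cancers that were detected.
sensitivity = true_positives / (true_positives + false_negatives)

# False-positive rate: share of healthy women flagged anyway.
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"sensitivity: {sensitivity:.2%}")                  # 84.00%
print(f"false-positive rate: {false_positive_rate:.2%}")  # 10.00%
```

A better model aims to push the first number up while pulling the second one down, which is exactly the balance the challenge scores teams on.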
“It’s sort of this brave new world of thinking of data analytics and medicine, and seeing if we can extract even more useful clinical info out of our imaging rather than what’s seen by the eye,” Lee said.
They’ll have through December to present their best idea. The participants are in constant communication, attending webinars and sharing advancements, said Guinney.
Then the team with the best idea will be awarded a cash prize. They and other participants will begin the “community” portion of the challenge in April—an effort to combine all their learning and create an even better product.
This approach combines data aggregation, crowd-sourcing and an outside perspective to look at mammogram analysis from a different, more concentrated angle.
The DREAM Challenge provides the coders with more than 640,000 mammograms through the Breast Cancer Surveillance Consortium and the Icahn School of Medicine at Mount Sinai, according to Lee and Guinney. They’ll use the existing data to find patterns and make connections that could teach computers to recognize those gaps in human analysis. Lee explained it as machines augmenting human skills.
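The pattern-finding Lee describes boils down to supervised learning: fit a model on labeled scans so it can flag the cases a human eye misses. The sketch below is purely illustrative, using a tiny logistic-regression classifier on synthetic 8x8 "scans" rather than the real 640,000-image dataset or the deep networks actual entries rely on; it only shows the shape of the training loop.

```python
import numpy as np

# Hypothetical toy setup: 400 synthetic 8x8 "scans" (64 pixel features each),
# with labels generated from a hidden linear rule plus noise. Real challenge
# entries train far larger models on real mammograms.
rng = np.random.default_rng(0)
n_samples, n_pixels = 400, 64
X = rng.normal(size=(n_samples, n_pixels))
true_w = rng.normal(size=n_pixels)
y = (X @ true_w + rng.normal(scale=0.5, size=n_samples) > 0).astype(float)

# Logistic regression trained by plain gradient descent on the log loss.
w = np.zeros(n_pixels)
learning_rate = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted cancer probability
    w -= learning_rate * X.T @ (p - y) / n_samples

# How often the fitted model agrees with the labels it was trained on.
predictions = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (predictions == y.astype(bool)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The principle scales: with enough labeled scans, the model's learned weights encode patterns that can complement, rather than replace, a radiologist's read.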
And more people working on one goal increases the odds of success. Plus, getting tech-oriented people to work on medical research could create a breakthrough, according to Guinney.
“People who are not acquainted with the field can really bring in fresh ideas and different perspectives that perhaps the field could really benefit from,” he said.
Guinney pointed out the DREAM Challenge creates a new incentive structure for innovation—not necessarily a better one, “just a different one”—that moves the focus from individual research publishing and toward a pooling of resources.
“There’s increasing pressure on people to act in the good of the patient rather than of their own academic aspirations,” he said.
Of course, bringing together two such different groups could cause some issues. In a perfect world, said Lee, combining a clinical and technological worldview could create the most productive way to tackle the problem of misunderstood mammogram scans. But doctors don’t always understand the way the algorithms work, and the coders don’t always understand the clinical nuances of breast cancer, he said.
“It is this joint effort of two very different ways of thinking about clinical data and bringing those two pieces together into something that’s going to be useful moving forward,” Lee said.
Guinney said he sees the potential for a similar problem.
“Not everyone approaches it with very deep thinking. Sometimes people participate in the challenge who may not understand the field very well, and therefore might take naive approaches or non-clinically relevant approaches,” he said.
Guinney said this challenge, with its hundreds of thousands of samples, is an exception: in biomedical research, the idea of big data doesn’t always have “real meaning.”
Sometimes there just isn’t enough comparable biomedical information (such as a particular type of tumor or cell) available to make big data analysis possible.
“One could talk about big data in terms of these numbers of points we can measure that are in the billions, but ultimately we’re constrained by the number of samples we have,” Guinney said.
Other DREAM (Dialogue for Reverse Engineering Assessments and Methods) Challenges don’t have that same deep store of data to examine. Nor do all DREAM Challenges focus on cancer or imaging: one current challenge, the Respiratory Viral DREAM Challenge, is trying to identify markers that explain why some people resist viral respiratory infections while others don’t.
The first challenge opened in 2006. The organization is now partnered with Sage Bionetworks and brings in additional partners for individual challenges; the Digital Mammography Challenge’s partner organizations, for example, include the National Cancer Institute, the FDA and Group Health Research Institute, among others.
The challenges can prove successful because they harness rivalries to solve problems, said Guinney.
“[The goal is] having a very high impact result for the field, rather than just having a prize and people walking away and that being the end of it,” he said.