
ORIGINAL ARTICLE

Machine Learning in Medical Imaging


Maryellen L. Giger, PhD

Abstract
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various
radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease
discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and
components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been
defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based
phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for
discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since
deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more
difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial
clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision
support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision
making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow
radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision
medicine.
Key Words: Machine learning, deep learning, radiomics, computer-aided diagnosis, computer-assisted decision support
J Am Coll Radiol 2018;15:512-520. Copyright © 2018 Published by Elsevier Inc. on behalf of American College of Radiology

Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. Although computer-aided detection (CADe) has been proposed, developed, and clinically used since 1966, especially in thoracic and breast imaging [1-5], the widespread progress in multiple clinical decision-making tasks and multiple disease sites has only advanced in the past decades with the corresponding access to large computational resources, including computer power, storage, and digital imaging, as well as increased electronic access to information at the time of interpretation (eg, clinical history, laboratory data, prior examinations).

A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Figure 1 shows the number of publication counts in PubMed for searches on computer-aided diagnosis (CADx) in radiology, machine learning, and deep learning from 1972 to the middle of 2017. Note that in each of these areas, there are numerous review publications; however, the aim of this article is to elucidate the concepts and generalities. The range in presentation of various subtle disease states, the need for large annotated clinical data sets, and the complex structure of many machine learning methods signify much need for continued research and development before full clinical incorporation and use.

Department of Radiology, The University of Chicago, Chicago, Illinois. Corresponding author and reprints: Maryellen L. Giger, PhD, University of Chicago, Department of Radiology, MC 2026, 5841 S Maryland Ave, Chicago, IL 60637; e-mail: m-giger@uchicago.edu. Funded in parts by NIH U01CA195564, U01CA189240, and R01CA166945. M.L.G. is a stockholder in R2/Hologic, cofounder and equity holder in Quantitative Insights, and shareholder in QView and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. It is the University of Chicago Conflict of Interest Policy that investigators disclose publicly actual or potential significant financial interest that would reasonably seem to be directly and significantly affected by the research activities.

CADe, CADx, AND DECISION SUPPORT

Medical image interpretation is the main undertaking of radiologists, with the tasks requiring both good image quality and good image interpretation. Image interpretation by humans is limited by the presence of structure noise (camouflaging normal anatomical background), incomplete

https://doi.org/10.1016/j.jacr.2017.12.028
visual search patterns, fatigue, distractions, the assessment of subtle or complex disease states, vast amounts of image data, and the physical quality of the image itself.

Fig 1. Number of paper counts in PubMed for searches on computer-aided diagnosis in radiology, machine learning, and deep learning from 1972 to the middle of 2017.

CADe and CADx have been under development for decades [1-5]. In fact, CADe systems have already been commercialized and have been in clinical use since the turn of the century [6]. In addition, over the past few decades, various investigators have been developing image analysis methods for CADx, such as the computer-assisted quantitative characterization of breast lesions on clinical images, as well as in the assessment of cancer risk [4].

There is no one-size-fits-all when it comes to computer algorithms and specific radiological interpretation tasks. Each computerized image analysis method requires customizations specific to the task as well as the imaging modality. For example, in breast cancer risk assessment, computer-extracted characteristics of breast density or breast parenchymal pattern are computed and related to breast cancer risk factors [7-12]. CADe methods involve a localization task and serve as a second opinion to radiologists in their task of finding suspicious regions within images, as in screening mammograms, leaving subsequent patient management decisions to the radiologist. CADx involves the characterization of a region or tumor, initially indicated by either a radiologist or a computer, after which the computer characterizes the suspicious region or lesion or estimates its probability of disease, again leaving the patient management to the physician [4].

RADIOMICS AND IMAGING GENOMICS (RADIOGENOMICS)

Effective diagnosis and treatment of disease rely on the integration of information from multiple patient tests involving clinical, molecular, imaging, and genomic data (ie, various “-omics”). Radiomics, an expansion of CADx, has been defined as the conversion of images to minable data [13-15]. Obtaining radiomic data may involve computer segmentation of a tumor from its background followed by computer extraction of various tumor features. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics).

Fig 2. Schematic flowchart of a computerized tumor phenotyping system for breast cancers on DCE-MRI (University of Chicago High-Throughput MRI Phenotyping System). The computer-aided diagnosis (CAD) radiomics pipeline includes computer segmentation of the tumor from the local parenchyma and computer-extraction of “handcrafted” radiomic features covering six phenotypic categories: (1) size (measuring tumor dimensions), (2) shape (quantifying the 3-D geometry), (3) morphology (characterizing tumor margin), (4) enhancement texture (describing the heterogeneity within the texture of the contrast uptake in the tumor on the first postcontrast MRIs), (5) kinetic curve assessment (describing the shape of the kinetic curve and assessing the physiologic process of the uptake and washout of the contrast agent in the tumor during the dynamic imaging series), and (6) enhancement-variance kinetics (characterizing the time course of the spatial variance of the enhancement within the tumor) [16-21]. CAD = computer-aided diagnosis; DCE-MRI = dynamic contrast-enhanced MRI.

Radiomic features can be described as handcrafted or engineered, with intuitive features or deep-learned features. In this section, the focus is on handcrafted features for which computer algorithms are developed based on some analytical feature-extraction approach, such as the calculation of the geometric shape of a tumor. For example, Figure 2 demonstrates a computer-aided diagnosis or radiomics pipeline for the computer extraction of various characteristics of breast tumors on dynamic
contrast-enhanced MRI [22,23]. After the tumor is delineated from the parenchymal background (ie, computer segmentation), the various radiomic features are calculated. The number of radiomics publications highlighting the role of quantitative imaging biomarkers is dramatically increasing, with the focus going beyond CADx [4,13].

A major focus of radiomics is cancer. Cancers are spatially heterogeneous, and currently, many imaging biomarkers of cancerous tumors include only size and simple enhancement measures (if dynamic imaging is employed). Various genomic studies have demonstrated the heterogeneity of primary breast cancer tumors [24]. With radiomics, the goal is to obtain image-based phenotypes of the cancerous tumor including size, shape, margin morphology, enhancement texture, kinetics, and variance kinetic phenotypes. For example, enhancement texture phenotypes characterize the tumor texture pattern of contrast-enhanced tumors on the first postcontrast images and thus quantitatively characterize the heterogeneous nature of contrast uptake within the breast tumor [16,17,23]. For example, the larger the enhancement texture entropy, the more heterogeneous the pattern within the tumor, potentially reflecting the heterogeneous nature of angiogenesis and treatment susceptibility, serving as a location-specific “virtual digital biopsy.”

A major gap in breast cancer research is the elucidation of the relationship between the macroscopic appearance of the tumor and its environment and biologic indicators of risk, prognosis, or treatment response. Imaging genomics (ie, “radiogenomics”) aims to find relationships between imaging data and clinical data, molecular data, genomic data, and outcome data [25-28]. During this “discovery stage,” the goal is to identify associated radiomic features for later application in developing predictive models for use in risk assessment, screening, detection, diagnosis, prognosis, therapeutic response, risk of recurrence, and so on.

Basically, tumors are different, so can imaging capture the phenotypic differences and the heterogeneity within? Is it possible to decide targeted therapy based on imaging genomics association results? Can imaging features inform important genomics features? Can integration of imaging and genomics features lead to higher power in prediction? Can imaging serve as a virtual digital biopsy, because it is noninvasive, covers the complete tumor, and is repeatable? It is important to note that the intention is not to use radiomics to replace conventional biopsies and genetic testing; however, from imaging genomics association studies, the aim is to ultimately understand the relationships between the image-based phenotypes and genetics, and bring imaging findings earlier into screening and treatment regimens to potentially avoid serial biopsies and provide virtual biopsies when actual biopsies are not practical.

These goals are to be achieved using image data from routine clinical imaging examinations. However, for radiomics to progress in biomedical discovery and clinical prediction, sufficient harmonization is necessary for clinical translation in terms of reproducibility and repeatability. Thus, studies have been conducted with focus on robustness of the imaging systems or robustness of the radiomic features. Various initiatives have focused on the aspects of quantitation and robustness, including those of the Quantitative Imaging Network of the National Cancer Institute [29] and the Quantitative Imaging Biomarkers Alliance of the RSNA [30].

With such methods, investigators are phenotypically characterizing solid tumors to gain image-based information on the underlying genetic makeup. For example, in a multi-institutional National Cancer Institute collaboration, which used de-identified data sets of invasive breast carcinomas from The Cancer Genome Atlas and The Cancer Imaging Archive [31,32], the relationships between computer-extracted radiomic MRI tumor features and various clinical, molecular, and genomic markers of prognosis and risk of recurrence, including gene expression profiles, were investigated [23,27,28,33,34]. Statistically significant associations were seen between quantitative MRI radiomic features and various clinical, molecular, and genomic features in breast invasive carcinoma. Promising significant trends were observed between enhancement texture (entropy) and molecular subtypes (normal-like, luminal A, luminal B, HER2-enriched, basal-like), even after controlling for tumor size. Also discovered were some highly specific imaging-genomic associations, which may be potentially useful in (1) imaging-based diagnoses that can inform the genetic progress of the tumor and (2) discovery of genetic mechanisms that regulate the development of tumor phenotypes. The authors noted that the computer-extracted MRI phenotypes show promise for high-throughput discrimination of breast cancer subtypes, which may yield a quantitative predictive signature for assessing prognosis.

In another example, a group characterized lung tumors on CT through a radiomic analysis of 440 features quantifying tumor image intensity, shape, and texture from 1,019 patients with lung or head-and-neck cancer
[35]. Using an independent data set, many of the radiomic features were shown to have prognostic power. The imaging-genomics association study noted that a prognostic radiomic signature, characterizing tumor heterogeneity, seemed associated with underlying gene-expression patterns. Figure 3 shows a radiomics heat map from the unsupervised clustering of lung cancer patients and radiomic feature expression that revealed clusters of patients with similar radiomic expression patterns.

Fig 3. Radiomics heat map. (a) Unsupervised clustering of lung cancer patients (Lung1 set, n = 422) on the y axis and radiomic feature expression (n = 440) on the x axis revealed clusters of patients with similar radiomic expression patterns. (b) Clinical patient parameters showing significant association of the radiomic expression patterns with primary tumor stage (T-stage; P < 1 × 10⁻²⁰, χ² test), overall stage (P = 3.4 × 10⁻³, χ² test), and histology (P = 0.019, χ² test). (c) Correspondence of radiomic feature groups with the clustered expression patterns. Reprinted with permission [35].

Radiomics also allows for the use of computer-extracted lesion features as image-based biomarkers (image-based phenotypes) in predicting a patient’s response to a particular therapeutic treatment. For example, the functional tumor volume from breast MRI has been shown to be a predictor of recurrence-free survival of patients undergoing neoadjuvant therapy in an evaluation using data from an ACRIN study [36].

MACHINE LEARNING

Computer-extracted (radiomic) features can serve as input to machine learning algorithms (ie, computer algorithms that “learn” a specific task given specific input data). With such machine learning methods, multiple radiomic features are merged into a single value, such as a tumor signature, which might be related to the likelihood of disease state (eg, see Clark et al [32]).

Various machine learning techniques have been applied across the decades, for example, linear discriminant analysis, support vector machines, decision trees and
Fig 4. Examples of DCE-MRI transverse center slices with the corresponding regions of interest (ROIs) extracted. On the left is a benign case and on the right is a malignant case. These extracted ROIs are then input to a CNN for transfer learning. DCE = dynamic contrast-enhanced MRI. (Reprinted with permission [37])
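The ROI extraction step pictured here (and in the Figure 2 pipeline) can be sketched in a few lines. This is an illustrative sketch only: the 64 × 64 patch size, the synthetic image, and the function name `extract_roi` are assumptions for illustration, not details from the article.

```python
import numpy as np

def extract_roi(image, center, size=64):
    """Crop a size x size region of interest (ROI) around a
    radiologist-indicated tumor center, zero-padding at the borders."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")  # guard against image edges
    r, c = center[0] + half, center[1] + half      # shift center into padded frame
    return padded[r - half:r + half, c - half:c + half]

# Illustrative use on a synthetic "slice" (not real DCE-MRI data)
slice_ = np.random.rand(256, 256)
roi = extract_roi(slice_, center=(10, 200), size=64)
```

Zero-padding keeps the crop a fixed shape even when the indicated center lies near the image border, which matters because downstream CNNs expect a constant input size.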

random forests, and neural networks. Reviews of machine learning have been written over the past many years, including those that serve as tutorials to new investigators into the field [15,38].

Given the ever-increasing variations of computer-extracted features, both handcrafted and deep-learned, appropriate feature selection techniques are important. Various studies have been conducted in which investigators, using moderately large data sets, have evaluated the combination of feature selection and classification methods [39-41]. Such analyses have taken into account both performance (such as the area under the receiver operating characteristic curve for a particular clinical task) and variability as a way to identify the optimal signature. That is, a computer-derived tumor signature needs to both perform well in its specific task and be generalizable across cases.

DEEP LEARNING

Deep learning is a subcategory of machine learning in which multiple-layered networks are used to assess complex patterns within the raw imaging input data. Most recently, deep learning has been conducted using deep convolutional neural networks (CNNs). Just as radiologists learn, during residency and beyond, by repeatedly correlating their visual interpretation of radiological images to actual clinical truth, so can machines. Although CNNs


Fig 5. Schematic demonstrating the comparison of conventional hand-crafted computer-aided diagnosis and radiomic features,
convolutional neural network (CNN)-extracted features, and an ensemble technique in the task of distinguishing between lesion
type as used in Antropova et al [37] and Huynh et al [42].
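The handcrafted branch of this comparison can be made concrete with a toy feature extractor. The three features below (pixel-count size, circle-equivalent diameter, and gray-level histogram entropy) are simplified stand-ins for the six phenotypic categories described for Figure 2; the function name and bin count are illustrative assumptions.

```python
import numpy as np

def handcrafted_features(image, mask, bins=32):
    """Toy radiomic features from a segmented tumor: a size measure,
    an effective diameter, and a gray-level histogram entropy
    (higher entropy = more heterogeneous enhancement pattern)."""
    area = int(mask.sum())                       # size: number of tumor pixels
    eff_diameter = 2.0 * np.sqrt(area / np.pi)   # circle-equivalent diameter
    vals = image[mask.astype(bool)]              # intensities inside the tumor
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                        # gray-level probabilities
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())     # texture heterogeneity
    return {"area": area, "eff_diameter": eff_diameter, "entropy": entropy}
```

Consistent with the article's description of enhancement texture, a uniformly enhancing region scores near-zero entropy on this definition, while heterogeneous uptake spreads intensities across many histogram bins and scores higher.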

have been used in CADe for decades, advances in computers have allowed for a dramatic increase in the number of layers within the CNN, thus resulting in the term deep.

A comprehensive technical review of deep learning in medical image analysis is given by Shen et al [43].

Possibly the earliest journal publication of CNNs in medical imaging was in 1994 and was for the computerized detection of microcalcifications in mammography [44]. In that work, a CNN was trained to create filters within a shift-invariant artificial neural network, enabling the enhancement of microcalcifications for further analyses within a CADe system [44]. Other early uses of CNNs include a study of their use in the classification of biopsy-proven masses and normal tissue on mammograms [45].

Fig 6. A diagonal classifier agreement plot between a convolutional neural network (CNN)-based classifier and a conventional computer-aided diagnosis (CADx) classifier for FFDM in the diagnostic task of estimating the probability of malignancy. The x axis denotes the output from the CNN-based classifier, and the y axis denotes the output from the conventional CADx classifier. Each point represents a region of interest (ROI) for which predictions were made. Points near or along the diagonal from bottom left to top right indicate high classifier agreement; points far from the diagonal indicate low agreement. ROI pictures of extreme examples of agreement and disagreement are included [37]. FFDM = full field digital mammography. (Reprinted with permission [37])

Advances in recent years in deep learning have been quite noteworthy, with CNNs seeing great success in many benchmark image classification tasks [46-48]. However, to be trained, CNNs require very large and correctly labeled data sets, as well as substantial computational resources. Thus, implementation of deep learning in medical decision making is occurring through use of pretrained CNNs (ie, “transfer learning,” with and without “fine-tuning”). Many developments have been published demonstrating the role of transfer learning in radiology. Basically, training CNNs “from scratch” is often not possible for CAD and other medical image interpretation tasks. However, generic features can be transferred from an already-trained CNN (ie, pretrained; eg, a CNN trained on natural scenes) to serve as features for input to classifiers focused on a medical imaging task. This process is known as transfer learning [49-52]. For example, the use of “off-the-shelf” CNNs pretrained on everyday objects, such as cats and dogs, can be used to characterize tumors on breast images [37,42], transferring knowledge from general object recognition tasks to medical imaging

Fig 7. Receiver operating characteristic curves showing statistically significant improvement in diagnostic classification of breast lesions on FFDM, ultrasound, and breast MRI when output from conventional CADx and deep learning are combined [37]. AUC = area under the curve; CAD = computer-aided diagnosis; CNN = convolutional neural network; DCE = dynamic contrast-enhanced MRI; FFDM = full field digital mammography; US = ultrasound. Reprinted with permission [37].
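The kind of fusion evaluated in Figure 7 can be caricatured as late fusion: average the continuous malignancy scores of a conventional CADx classifier and a CNN-based classifier, then compare areas under the ROC curve (AUC). The scores below are synthetic stand-ins invented for illustration, and simple averaging is only one possible fusion rule.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive-negative pairs ranked correctly."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()   # ties count half
    return wins / (len(pos) * len(neg))

def fuse(cadx_scores, cnn_scores):
    """Late fusion: average the two classifiers' probability outputs."""
    return 0.5 * (np.asarray(cadx_scores) + np.asarray(cnn_scores))

# Synthetic example: two imperfect classifiers with partly independent errors
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
cadx = np.array([0.2, 0.6, 0.3, 0.4, 0.9, 0.4, 0.8, 0.7])
cnn = np.array([0.1, 0.3, 0.7, 0.2, 0.5, 0.9, 0.6, 0.8])
print(auc(cadx, labels), auc(cnn, labels), auc(fuse(cadx, cnn), labels))
```

Because the two toy classifiers err on different cases, averaging their outputs cancels some errors and raises the AUC above either one alone, mirroring the qualitative behavior the figure reports for combined conventional CADx and deep-learning output.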

classification tasks. In addition, fine-tuning of trained CNNs is occurring, in which investigators use only a portion of a CNN trained for a different task and retrain the later layers of the CNN specifically for the new task. These methods allow for harnessing the predictive power of deep neural networks without the need for extremely large data sets or computational cost requirements.

For example, transfer learning has been successfully used in the diagnosis of breast tumors on mammography, ultrasound, and breast MRI [37,42]. In a breast imaging CADx system, deep neural networks with transfer learning were used, and specific layers of the CNN served as features for subsequent classifiers. Three scenarios were evaluated: a CADx system with computer-extracted handcrafted features, a CADx system with CNN-extracted features, and an ensemble classifier trained on both types of features. Across all three breast imaging modalities, the ensemble classifier performed best, indicating the potential for the complementary use of both handcrafted and deep-learned tumor features in medical decision making (Figs. 4, 5, 6, 7).

CNNs have been investigated for multiple tasks including the detection of colonic polyps on CT colonography as shown in Figure 1 of reference [53] and detecting patterns of interstitial lung disease on CT [54]. In another example, a CNN for detection was trained on digital mammography, but then transfer learning was conducted to allow the image patterns learned from mammograms to be transferred to the analysis of breast tomosynthesis images, indicating the ability to transfer between radiological modalities [55].

Although deep learning allows for computers to learn directly from image data, for each clinical task, millions of images are needed for CNNs to be trained “from scratch.” Such an example is that of detection of diabetic retinopathy in retinal fundus photographs [56].

DISCUSSION AND SUMMARY

Although many machine learning imaging publications are presented and published each year, there are still only a few methods that are able to handle the vast range of radiological presentations of subtle disease states. For example, the use of CNNs to distinguish trabecular bone structure or interstitial lung diseases involves subtle changes in texture-type patterns, which are quite different from everyday photos of cats and dogs.

The use of deep learning terminology has also caused concern in the use of a “black box” for medical tasks; however, there are methods to assess the learned parameters within a CNN to understand its decision-making focus and methodology.

In the future, to cover the entirety of radiology, there are challenges and potential pitfalls. For deep learning in radiology to succeed, recall that appropriately annotated large data sets are needed, deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects.

However, in the future, machine learning in radiology is expected to have a substantial clinical impact, with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.

TAKE-HOME POINTS

- Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks.
- Radiomics, the -omics of images, is an expansion of CADx.
- Machine learning enables the use of radiomics in computer-learned tumor signatures.
- Deep learning, a subcategory of machine learning, allows computers to learn directly from image data; however, for each clinical task, millions of images are expected to be needed for CNNs to be trained “from scratch.”
- Although many machine learning imaging publications are presented and published each year, there are still only a few methods that are able to handle the vast range of radiological presentations of subtle disease states. The range in presentation of various subtle disease states, the need for large annotated clinical data sets, and the complex structure of many machine learning methods signify much need for continued research and development before full clinical incorporation and use.
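The frozen-feature-extractor idea behind the transfer learning discussed in the Deep Learning section (early CNN layers reused as fixed feature computers, with only a lightweight classifier trained on the medical data) can be sketched as follows. This is a schematic sketch: the filters here are random stand-ins for weights that would, in practice, come from a network pretrained on everyday objects, and the nearest-class-mean head stands in for the classifiers cited in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "early layer" filters. In real transfer learning these weights
# would come from a CNN pretrained on everyday objects (eg, ImageNet);
# random filters here are a stand-in so the sketch stays self-contained.
filters = rng.standard_normal((8, 5, 5))

def cnn_features(image):
    """Convolve the image with each frozen 5x5 filter, then pool the
    response magnitudes globally to get one feature per filter."""
    h, w = image.shape
    feats = []
    for f in filters:
        out = np.empty((h - 4, w - 4))
        for i in range(h - 4):
            for j in range(w - 4):
                out[i, j] = (image[i:i + 5, j:j + 5] * f).sum()
        feats.append(np.abs(out).mean())  # magnitude + global average pool
    return np.array(feats)

def train_head(X, y):
    """Train only the lightweight head (here a nearest-class-mean rule);
    the feature extractor above stays frozen, as in transfer learning."""
    means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return lambda x: min(means, key=lambda c: np.linalg.norm(x - means[c]))
```

The division of labor is the point: the expensive representation is computed once and reused across tasks, while only the final decision rule is fit to the new (typically small) medical data set.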

REFERENCES
1. Lodwick GS. Computer-aided diagnosis in radiology. A research plan. Invest Radiol 1966;1:72-80.
2. Giger ML. Computerized image analysis in breast cancer detection and diagnosis. Sem Breast Dis 2002;5:199-210.
3. Giger ML, Chan H-P, Boone J. Anniversary paper: history and status of CAD and quantitative image analysis: the role of medical physics and AAPM. Med Phys 2008;35:5799-820.
4. Giger ML, Karssemeijer N, Schnabel J. Breast image analysis for risk assessment, detection, diagnosis, and treatment of cancer. Annu Rev Biomed Eng 2013;15:327-57.
5. Rao VM, Levin DC, Parker L, Cavanaugh B, Frangos AJ, Sunshine JH. How widely is computer-aided detection used in screening and diagnostic mammography? J Am Coll Radiol 2010;7:802-5.
6. Freer TW, Ulissey MJ. Screening mammography with computer-aided detection: prospective study of 12,860 patients in a community breast center. Radiology 2001;220:781-6.
7. Alonzo-Proulx O, Packard N, Boone JM, et al. Validation of a method for measuring the volumetric breast density from digital mammograms. Phys Med Biol 2010;55:3027-44.
8. van Engeland S, Snoeren PR, Huisman H, Boetes C, Karssemeijer N. Volumetric breast density estimation from full-field digital mammograms. IEEE Trans Med Imaging 2006;25:273-82.
9. Huo Z, Giger M, Olopade O, Wolverton D, Weber B, et al. Computerized analysis of digitized mammograms of BRCA1 and BRCA2 gene mutation carriers. Radiology 2002;225:519-26.
10. Manduca A, Carston M, Heine J, et al. Texture features from mammographic images and risk of breast cancer. Cancer Epidemiol Biomarkers Prev 2009;18:837-45.
11. Nielsen M, Karemore G, Loog M, et al. A novel and automatic mammographic texture resemblance marker is an independent risk factor for breast cancer. Cancer Epidemiol 2011;35:381-7.
12. Li H, Giger M, Olopade O, Lan L. Fractal analysis of mammographic parenchymal patterns in breast cancer risk assessment. Acad Radiol 2007;14:513-21.
13. Limkin EJ, Sun R, Dercle L, et al. Promises and challenges for the implementation of computational medical imaging (radiomics) in oncology. Ann Oncol 2017;28:1191-206.
14. Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures, they are data. Radiology 2016;278:563-77.
15. Avanzo M, Stancanello J, El Naqa I. Beyond imaging: the promise of radiomics. Phys Med 2017;38:122-39.
16. Chen W, Giger ML, Bick U, Newstead G. Automatic identification and classification of characteristic kinetic curves of breast lesions on DCE-MRI. Med Phys 2006;33:2878-87.
17. Chen W, Giger ML, Li H, Bick U, Newstead G. Volumetric texture analysis of breast lesions on contrast-enhanced magnetic resonance images. Magn Reson Med 2007;58:562-71.
18. Chen W, Giger ML, Bick U. A fuzzy c-means (FCM) based approach for computerized segmentation of breast lesions in contrast-enhanced MR images. Acad Radiol 2006;13:63-72.
19. Gilhuijs KGA, Giger ML, Bick U. Automated analysis of breast lesions in three dimensions using dynamic magnetic resonance imaging. Med Phys 1998;25:1647-54.
20. Chen W, Giger ML, Lan L, Bick U. Computerized interpretation of
TCGA/TCIA data set. NPJ Breast Cancer 2016:2. pii: 16012; Epub May 11, 2016.
24. Parker JS, Perou CM. Tumor heterogeneity: focus on the leaves, the trees, or the forest? Cancer Cell Previews 2015;28:149-50.
25. Li H, Giger ML, Sun C, et al. Pilot study demonstrating association between breast cancer image-based risk phenotypes and genomics biomarkers. Med Phys 2014;41:031917.
26. Gierach GL, Li H, Loud JT, et al. Relationships between computer-extracted mammographic texture pattern features and BRCA1/2 mutation status: a cross-sectional study. Breast Cancer Res 2014;23:424.
27. Zhu Y, Li H, Guo W. Deciphering genomic underpinnings of quantitative MRI-based radiomic phenotypes of invasive breast carcinoma. Sci Rep 2015;5:17787.
28. Guo W, Li H, Zhu Y, et al. Prediction of clinical phenotypes in invasive breast carcinomas from the integration of radiomics and genomics data. J Med Imaging (Bellingham) 2015;2:041007.
29. National Cancer Institute. Quantitative imaging network (QIN). Available at: https://imaging.cancer.gov/programs_resources/specialized_initiatives/qin.htm. Accessed January 1, 2015.
30. RSNA. Quantitative Imaging Biomarkers Alliance (QIBA). Available at: https://www.rsna.org/qiba/. Accessed January 25, 2018.
31. Cancer Genome Atlas Network. Comprehensive molecular portraits of human breast tumours. Nature 2012;490:61.
32. Clark K, Bruce V, Smith K, et al. The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. J Digit Imaging 2013;26:1045.
33. Burnside E, Drukker K, Li H, et al. Using computer-extracted image phenotypes from tumors on breast MRI to predict breast cancer pathologic stage. Cancer 2016;122:748-57.
34. Li H, Zhu Y, Burnside ES, et al. MRI radiomics signatures for predicting the risk of breast cancer recurrence as given by research versions of gene assays of MammaPrint, Oncotype DX, and PAM50. Radiology 2016;281:382-91.
35. Aerts HJ, Velazquez ER, Leijenaar RT, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun 2014;5:4006.
36. Hylton NM, Gatsonis CA, Rosen MA, et al. Neoadjuvant chemotherapy for breast cancer: functional tumor volume by MR imaging predicts recurrence-free survival-results from the ACRIN 6657/CALGB 150007 I-SPY 1 trial. Radiology 2016;279:44-55.
37. Antropova N, Huynh BQ, Giger ML. A deep fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys 2017;44:5162-71.
38. Erickson BJ, Korfiatis P, Akkus Z, Kline TL. Machine learning for medical imaging. Radiographics 2017;37:505-15.
39. Jamieson A, Giger ML, Drukker K, Li H, Yuan Y, Bhooshan N. Exploring non-linear feature space dimension reduction and data representation in breast CADx with Laplacian eigenmaps and t-SNE. Med Phys 2010;37:339-51.
40. Jamieson AR, Giger ML, Drukker K, Pesce L. Enhancement of breast CADx with unlabeled data. Med Phys 2010;37:4155-72.
41. Parmar C, Grossmann P, Bussink J, Lambin P, Aerts H. Machine learning methods for quantitative radiomic biomarkers. Sci Rep 2015;5:13087.
42. Huynh B, Li H, Giger ML. Digital mammographic tumor classifica-
breast MRI: investigation of enhancement-variance dynamics. Med tion using transfer learning from deep convolutional neural networks.
Phys 2004;31:1076-82. J Med Imaging 2016;3:034501.
21. Bhooshan N, Giger ML, Jansen S, Li H, Lan L, Newstead G. 43. Shen D, Wu G, Suk HI. Deep learning in medical image analysis.
Cancerous breast lesions on dynamic contrast-enhanced MR images: Annu Rev Biomed Eng 2017;19:221-48.
computerized characterization for image-based prognostic markers. 44. Zhang W, Doi K, Giger ML, Wu Y, Nishikawa RM, Schmidt RA.
Radiology 2010;254:680-90. Computerized detection of clustered microcalcifications in digital
22. Chen W, Giger ML, Newstead GM, et al. Computerized assessment of mammograms using a shift-invariant artificial neural network. Med
breast lesion malignancy using DCE-MRI: robustness study on two Phys 1994;21:517-24.
independent clinical datasets from two manufacturers. Acad Radiol 45. Sahiner B, Chan HP, Petrick N, et al. Classification of mass and
2010;17:822-9. normal breast tissue: a convolution neural network classifier with
23. Li H, Zhu Y, Burnside ES, et al. Quantitative MRI radiomics in the spatial domain and texture images. IEEE Trans Med Imaging 1996;15:
prediction of molecular classifications of breast cancer subtypes in the 598-610.

Journal of the American College of Radiology 519


Giger ■ Machine Learning in Medical Imaging
46. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 2012;25:1097-105.
47. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.
48. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. CoRR abs/1409.4842. 2014.
49. Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng 2010;22:1345-9.
50. Yosinski J, Clune J, Bengio Y, et al. How transferable are features in deep neural networks? CoRR abs/1411.1792. 2014.
51. Razavian AS, Azizpour H, Sullivan J, et al. CNN features off-the-shelf: an astounding baseline for recognition. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE; 2014:512-9.
52. Donahue J, Jia Y, Vinyals O, et al. DeCAF: a deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531. 2013.
53. Roth H, Lu L, Liu J, et al. Improving computer-aided detection using convolutional neural networks and random view aggregation. IEEE Trans Med Imaging 2016;35:1170-81.
54. Anthimopoulos M, Christodoulidis S, Christe A, Mougiakakou S. Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imaging 2016;35:1207-16.
55. Samala R, Chan H-P, Hadjiiski L, Helvie MA, Wei J, Cha K. Mass detection in digital breast tomosynthesis: deep convolutional neural network with transfer learning from mammography. Med Phys 2016;43:6654-66.
56. Gulshan V, Peng L, Coram M, Stumpe MC, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016;316:2402-10.



Volume 15 ■ Number 3PB ■ March 2018
