Digital mammographic tumor classification using transfer learning from deep convolutional neural networks

J Med Imaging (Bellingham). 2016 Jul;3(3):034501. doi: 10.1117/1.JMI.3.3.034501. Epub 2016 Aug 22.

Abstract

Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from image data instead of relying on analytically extracted features. However, CNNs are difficult to train from scratch for medical images because of small sample sizes and variations in tumor presentation. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and on our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve (AUC) as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features (AUC = 0.81). Further, the performance of ensemble classifiers based on both feature types was significantly better than that of either classifier type alone (AUC = 0.86 versus 0.81, p = 0.022). We conclude that transfer learning can improve current CADx methods while also providing standalone classifiers without large datasets, facilitating machine-learning methods in radiomics and precision medicine.
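The pipeline summarized above (a pretrained CNN used as a fixed feature extractor, SVM classifiers, five-fold cross validation by lesion, and an ensemble with the analytically extracted features) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes AlexNet from torchvision (version 0.13 or later) as the pretrained network, 4096-dimensional penultimate-layer activations as the CNN features, GroupKFold to approximate cross validation "by lesion", and placeholder inputs (roi_images, X_analytical, y, lesion_ids) supplied by the reader.

"""
Sketch of a transfer-learning CADx pipeline (illustrative assumptions only).
"""
import numpy as np
import torch
from torch import nn
from torchvision import models, transforms
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score

# --- 1. Pretrained CNN as a fixed feature extractor (no fine-tuning) ---
alexnet = models.alexnet(weights="IMAGENET1K_V1")   # assumes torchvision >= 0.13
alexnet.eval()
extractor = nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],      # keep the 4096-d penultimate layer
)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Lambda(lambda x: x.repeat(3, 1, 1)),  # grayscale ROI -> 3 channels
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def cnn_features(roi_images):
    """roi_images: list of 2-D numpy arrays (lesion ROIs scaled to [0, 1])."""
    with torch.no_grad():
        batch = torch.stack([preprocess(img.astype(np.float32)) for img in roi_images])
        return extractor(batch).numpy()              # shape (n_images, 4096)

# --- 2. SVM classifier with five-fold cross validation grouped by lesion ---
def cv_scores(features, labels, lesion_ids, n_splits=5):
    """Return out-of-fold malignancy scores for one feature set."""
    scores = np.zeros(len(labels), dtype=float)
    for train, test in GroupKFold(n_splits).split(features, labels, groups=lesion_ids):
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
        clf.fit(features[train], labels[train])
        scores[test] = clf.predict_proba(features[test])[:, 1]
    return scores

# --- 3. Compare CNN features, analytical features, and their ensemble ---
def evaluate(X_cnn, X_analytical, y, lesion_ids):
    s_cnn = cv_scores(X_cnn, y, lesion_ids)
    s_ana = cv_scores(X_analytical, y, lesion_ids)
    s_ens = 0.5 * (s_cnn + s_ana)                    # simple soft-vote ensemble
    return {name: roc_auc_score(y, s) for name, s in
            [("CNN", s_cnn), ("analytical", s_ana), ("ensemble", s_ens)]}

Averaging the two classifiers' soft scores is only one simple way to form the ensemble; the abstract does not specify the exact fusion scheme used in the study.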

Keywords: computer-aided diagnosis; convolutional neural networks; deep learning; mammography; precision medicine; radiomics; transfer learning.