Abstract
Classifying MR images by their contrast mechanism can be useful in image segmentation, where additional information from different contrast mechanisms can improve intensity-based segmentation and help separate the class distributions. Automated identification of image type is also beneficial for archive management, image retrieval, and staff training. Different clinics and scanners use their own image labeling schemes, resulting in ambiguity when sorting images, and manual sorting of thousands of images would be laborious and prone to error. In this work, we used transfer learning to modify pretrained residual convolutional neural networks to classify MR images by their contrast mechanism. Training and validation were performed on a total of 5169 images belonging to 10 different classes, acquired on scanners from different MRI vendors and at different field strengths. Training and validation took 36 min. Testing was performed on a separate data set of 2474 images. The percentage of correctly classified images (accuracy) was 99.76%. (A deeper version of the residual network, trained for 103 min, showed a slightly lower accuracy of 99.68%.) With real-world model deployment in mind, performance on a single-CPU computer was compared with a GPU implementation. Highly accurate classification, training, and testing can be achieved without a GPU in a relatively short training time through proper choice of convolutional neural network and hyperparameters, making it feasible to improve accuracy by repeated training with cumulative training sets. Techniques to further improve accuracy are discussed and demonstrated. Derived heatmaps indicate the image regions used in decision making and correspond well with expert human perception. The methods used can be easily extended to other classification tasks with minimal changes.
Availability of Data and Material
Requests will be considered as appropriate.
Code Availability
Available.
Funding
National Institutes of Health.
Ethics declarations
Ethics Approval
This is an observational study with no currently intended diagnostic use and no personally identifiable information. The NIH IRB Committee confirmed that no ethical approval is required.
Consent to Participate
N.A.
Consent for Publication
N.A.
Conflict of Interest
The author declares no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Gai, N.D. Highly Efficient and Accurate Deep Learning–Based Classification of MRI Contrast on a CPU and GPU. J Digit Imaging 35, 482–495 (2022). https://doi.org/10.1007/s10278-022-00583-1