Segmentation of Microscope Erythrocyte Images by CNN-Enhanced Algorithms
Figure 1. The flowchart of segmentation of malaria-parasite-infected erythrocytes.
Figure 2. The flowchart of the erythrocyte and leukocyte segmentation method.
Figure 3. U-net architecture. The number of channels is denoted above each box.
Figure 4. Diagram of the proposed algorithm.
Figure 5. Background subtraction: image fragments with corresponding intensity profile plots. (a) Before background subtraction; (b) after background subtraction.
Figure 6. Example of a binary image (a) and its distance map (b).
Figure 7. Original grayscale image (a); image after window/level transformation (b).
Figure 8. Noise removal: original image (a), after median filter (b), after median and mean filters applied (c).
Figure 9. Image after bilateral filter applied.
Figure 10. Image after Otsu segmentation (a) and with object holes filled (b).
Figure 11. Eroded image (a) and its distance map (b).
Figure 12. Watershed-segmented image (a) and, after removing objects that are small or near the edges, the result presented as a binary image (b).
Figure 13. Selected region of interest (ROI) (a), ROI masked using a neighbouring object (b), mask of the object being considered (c), dilated mask of the object being considered (d), and final combined mask (e).
Figure 14. Otsu segmentation (a), segmented object (b), segmented object with filled holes (c), extracted object contour (d).
Figure 15. Examples of erythrocyte contours (red) with short (green) and long (blue) axes marked.
Figure 16. (a) Probability map segmented using the state-of-the-art approach, with a closeup of connected objects (c); (b) the authors' proposal results as an overlay on the original image, with a closeup of connected objects (d). The authors' proposal is slightly more precise at segmenting connected objects.
Figure 17. Examples of segmented cells categorized as normal, abnormal, or wrongly segmented.
Figure 18. Example of an output image fragment.
Figure 19. Example of an artificial mock image (Gaussian smoothing radius 3, noise standard deviation 30).
Figure 20. Fragments of mock test images with Gaussian smoothing radius 3 (a) and 5 (b), and noise standard deviation 15 (a) and 30 (b).
Abstract
1. Introduction
Background
2. Methods Used
Algorithm 1: Authors' algorithm
Input: 16-bit microscopic erythrocyte image. Output: Image containing contours and axes of erythrocytes.
2.1. Removing Noise
2.1.1. Median Filter Application
- f̂(x, y) — calculated median of the values g(s, t) of the original image over the area S_xy;
- g(s, t) — values of the original image in the area with its center in point (x, y);
- S_xy — set of coordinates under a mask of size m × n.
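As a concrete illustration (not the authors' implementation), here is a minimal NumPy sketch of the median filter step, assuming the standard definition in which each output pixel is the median of the values under an m × n mask centered on it; edge-replication padding at the borders is an assumption, since the paper does not state its border policy:

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel with the median of its size x size neighborhood S_xy."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")  # replicate border values
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + size, x:x + size]  # the S_xy neighborhood
            out[y, x] = np.median(window)
    return out
```

For real images, `scipy.ndimage.median_filter` implements the same operation far more efficiently; the loop version above is only meant to make the definition explicit.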
2.1.2. Application of Bilateral Filter
- G_σ — spatial domain Gaussian;
- σ_s and σ_r — measures of image filtering (spatial and range smoothing strength);
- I — input image;
- p and q — pixel positions entering the distance term.
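A minimal NumPy sketch of the bilateral filter described by these symbols, where each output pixel is a weighted mean of its neighbors and the weights combine a spatial Gaussian (σ_s) with a range Gaussian on intensity differences (σ_r). The parameter names and the window radius are illustrative assumptions, not values from the paper:

```python
import numpy as np

def bilateral_filter(image, sigma_s=2.0, sigma_r=30.0, radius=3):
    """Edge-preserving smoothing: spatial closeness and intensity similarity
    both contribute to each neighbor's weight."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    # Precompute the spatial Gaussian over the (2*radius+1)^2 window.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(yy**2 + xx**2) / (2 * sigma_s**2))
    padded = np.pad(image.astype(float), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range Gaussian: penalize neighbors with different intensity.
            rng = np.exp(-(window - image[y, x])**2 / (2 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out
```

Because the range term suppresses contributions from across strong intensity edges, flat regions are smoothed while cell boundaries are preserved, which is why the method suits erythrocyte images better than plain Gaussian blurring.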
2.2. Background Subtraction
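Background subtraction flattens the slowly varying illumination before thresholding (compare the intensity profiles before and after in the figures above). A minimal sketch of one common approach, grayscale opening with a structuring element larger than any cell, used here as a stand-in for rolling-ball-style background estimation; the window size is an assumption:

```python
import numpy as np

def _grey_erode(image, size):
    """Grayscale erosion: minimum over a size x size window."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].min()
    return out

def _grey_dilate(image, size):
    """Grayscale dilation: maximum over a size x size window."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].max()
    return out

def subtract_background(image, size=15):
    """Estimate a slowly varying background by grayscale opening
    (erosion then dilation) with a window larger than any object,
    then subtract it from the input."""
    background = _grey_dilate(_grey_erode(image, size), size)
    return image - background
```

This variant assumes bright objects on a darker background; for dark cells on a bright background, invert the image first or use a closing instead of an opening.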
2.3. Distance Map
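The distance map assigns each foreground pixel of a binary image its distance to the nearest background pixel; its local maxima mark cell centers and serve as seeds for watershed segmentation. A brute-force NumPy sketch for illustration (real pipelines would use an efficient transform such as `scipy.ndimage.distance_transform_edt`):

```python
import numpy as np

def distance_map(binary):
    """Euclidean distance from each True (foreground) pixel to the
    nearest False (background) pixel. O(n_fg * n_bg), illustration only.
    Assumes at least one background pixel exists."""
    h, w = binary.shape
    bg = np.argwhere(~binary)            # background pixel coordinates
    out = np.zeros((h, w), dtype=float)  # background pixels stay at 0
    for y, x in np.argwhere(binary):
        d = np.sqrt(((bg - (y, x))**2).sum(axis=1))
        out[y, x] = d.min()
    return out
```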
2.4. Segmentation
2.4.1. Otsu Segmentation
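Otsu's method picks the global threshold that maximizes the between-class variance of the two resulting intensity classes. A self-contained NumPy sketch of that search (the bin count of 256 is an assumption; for 16-bit input a finer histogram may be appropriate):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold maximizing between-class variance.
    Pixels with value > threshold form one class, <= threshold the other."""
    hist, edges = np.histogram(image, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    best_var, best_t = -1.0, centers[0]
    for i in range(1, bins):               # candidate split before bin i
        w0 = hist[:i].sum() / total        # class-0 probability
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / (w0 * total)
        mu1 = (hist[i:] * centers[i:]).sum() / (w1 * total)
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i - 1]
    return best_t
```

On a strongly bimodal histogram (cells against a uniform background) the maximizing split falls between the two modes, which is the property the initial segmentation relies on.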
2.4.2. Watershed Segmentation
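Watershed segmentation floods the intensity surface (here, typically the inverted distance map) from markers, so that touching cells end up in separate basins. A compact Meyer-style priority-flood sketch; it grows labels in order of increasing intensity and, unlike library implementations such as ITK's, draws no explicit watershed lines:

```python
import heapq
import numpy as np

def watershed(image, markers):
    """Marker-based watershed: pixels are popped in order of increasing
    intensity and take the label of the neighbor that enqueued them.
    `markers` holds positive integer labels, 0 for unlabeled pixels."""
    labels = markers.copy()
    h, w = image.shape
    heap, counter = [], 0  # counter breaks ties deterministically
    for y, x in np.argwhere(markers > 0):
        heapq.heappush(heap, (image[y, x], counter, y, x))
        counter += 1
    while heap:
        _, _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]  # inherit flooding label
                heapq.heappush(heap, (image[ny, nx], counter, ny, nx))
                counter += 1
    return labels
```

Flooding from two markers separated by an intensity ridge assigns each side of the ridge to its own label, which is exactly the behavior used to split connected erythrocytes.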
3. The Proposed Methodology
3.1. Initial Processing and Segmentation
Algorithm 2: Noise Removal
Input: Preprocessed image. Output: Image with noise removed.
Algorithm 3: Initial Segmentation
Input: Image with noise removed. Output: Initially segmented image.
Algorithm 4: Individual Cell Processing
Input: Initially segmented image. Output: Fully segmented image.
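Individual cell processing isolates each object by growing a dilated mask of the cell under consideration and removing pixels claimed by neighbouring objects, yielding the combined ROI mask shown in the figures above. A minimal NumPy sketch of that mask combination; the function names and the dilation count are illustrative assumptions:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 cross structuring element."""
    padded = np.pad(mask, 1)   # zero (False) padding at the borders
    out = mask.copy()
    out |= padded[:-2, 1:-1]   # neighbor above
    out |= padded[2:, 1:-1]    # neighbor below
    out |= padded[1:-1, :-2]   # neighbor to the left
    out |= padded[1:-1, 2:]    # neighbor to the right
    return out

def cell_roi_mask(obj_mask, neighbour_mask, iterations=2):
    """Grow the object's mask to include its immediate surroundings,
    then exclude pixels belonging to neighbouring objects."""
    grown = obj_mask
    for _ in range(iterations):
        grown = dilate(grown)
    return grown & ~neighbour_mask
```

Re-thresholding within this combined mask lets each cell be segmented precisely without interference from adjacent cells.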
3.2. Individual Object Processing and Segmentation
3.3. Described Segmentation Technique as Probability Map Processor in Deep Learning Pipeline
3.4. Described Segmentation Technique Combined with Deep Learning for Results Categorization
4. Results
Results Evaluation: Comparison to the State of the Art
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| | Precision | Recall | Item Count |
|---|---|---|---|
| Abnormal | 0.79 | 0.58 | 26 |
| Normal | 0.70 | 0.81 | 26 |
| Wrongly segmented | 0.72 | 0.81 | 26 |
| Gaussian Smoothing Radius | 3 | 3 | 3 | 5 | 5 | 5 |
|---|---|---|---|---|---|---|
| Noise standard deviation | 6 | 15 | 30 | 6 | 15 | 30 |
| Sensitivity | 0.988 | 0.988 | 0.994 | 0.996 | 0.996 | 0.993 |
| Specificity | 0.997 | 0.997 | 0.997 | 0.997 | 0.997 | 0.996 |
| Precision | 0.978 | 0.980 | 0.978 | 0.981 | 0.976 | 0.968 |
| Negative predictive value | 0.998 | 0.998 | 0.999 | 0.999 | 0.999 | 0.999 |
| Accuracy | 0.996 | 0.996 | 0.997 | 0.997 | 0.997 | 0.995 |
| Number of objects detected | 124 | 124 | 124 | 124 | 124 | 124 |
| Gaussian Smoothing Radius | 3 | 3 | 3 | 5 | 5 | 5 |
|---|---|---|---|---|---|---|
| Noise standard deviation | 6 | 15 | 30 | 6 | 15 | 30 |
| Sensitivity | 0.994 | 0.995 | 0.997 | 0.999 | 0.999 | 0.999 |
| Specificity | 0.999 | 0.998 | 0.996 | 0.985 | 0.982 | 0.978 |
| Precision | 0.990 | 0.984 | 0.971 | 0.902 | 0.882 | 0.857 |
| Negative predictive value | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| Accuracy | 0.998 | 0.997 | 0.996 | 0.987 | 0.984 | 0.980 |
| Number of objects detected | 124 | 124 | 124 | 124 | 124 | 124 |
| Image Type | Full Size Mock Image | Cropped and Resized Images |
|---|---|---|
| Sensitivity | 0.994 | 0.969 |
| Specificity | 0.999 | 0.990 |
| Precision | 0.990 | 0.930 |
| Negative predictive value | 0.999 | 0.996 |
| Accuracy | 0.998 | 0.988 |
| Step | Image 1 (19 objects) | Image 2 (10 objects) |
|---|---|---|
| Initial segmentation time [s] | 0.4 | 0.35 |
| Individual cell processing time [s] | 0.85 | 0.57 |
| Full processing time (initial segmentation with individual processing) [s] | 1.25 | 0.92 |
| Stage | First Stage Baseline | Second Stage |
|---|---|---|
| Precision | 0.857 | 0.968 |
| Number of objects detected | 124 | 124 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Buczkowski, M.; Szymkowski, P.; Saeed, K. Segmentation of Microscope Erythrocyte Images by CNN-Enhanced Algorithms. Sensors 2021, 21, 1720. https://doi.org/10.3390/s21051720