Open Access. Published by De Gruyter, March 23, 2022, under a CC BY 4.0 license.

Trainable watershed-based model for cornea endothelial cell segmentation

  • Ahmed Saifullah Sami and Mohd Shafry Mohd Rahim

Abstract

Segmentation of medical images plays a significant role in diagnosis by computer-aided systems. This article focuses on the health of the human corneal endothelium, one of the main research interests concerning the human cornea. Various pathological conditions hasten the death of endothelial cells, which decreases cell density in an abnormal manner, and dead cells degrade the hexagonal pattern. Damaged endothelial cells cannot regenerate, leaving room for neighbouring cells to migrate and expand to fill the space; this results in unpredictable cell elongation, enlargement, and thinning. Cell density and shape are therefore considered major parameters for describing the health condition of the corneal endothelium. In this study, medical feature extraction depends on the segmentation of the endothelial cell boundaries, and segmenting such objects, especially the thin, transparent, and unclear cell boundaries, is challenging because of the way the endothelium layer is imaged by ophthalmologists using confocal or specular microscopy. The resulting images suffer from various issues that degrade their quality: non-uniform illumination and a large amount of noise and artefacts arising from high distortion, most of which stem from the nature of the imaging modality. The images usually contain noise and continuous shadow, the cells are separated by poorly defined borders, the cell shapes are irregular, and the contrast is low, with blurry boundaries, diverse objects, and a lack of homogeneity, all of which make segmentation difficult. The main aim of this study is to propose and develop a fully automatic, robust, and real-time model for the segmentation of human corneal endothelial cells obtained by in vivo microscopy and for the computation of different clinical features of these cells. To achieve this aim, a new image enhancement scheme is proposed: the Contrast-Limited Adaptive Histogram Equalisation (CLAHE) technique enhances contrast; a new image denoising technique called Wavelet Transform Filter and Butterworth Bandpass for Segmentation (WTBBS) is then applied; and, subsequently, brightness level correction is carried out using the moving average filter and CLAHE to reduce the effects of the non-uniform image lighting produced by the previous step. Because the segmentation of endothelial cells requires precise detection of the endothelial contours, a new segmentation model is proposed that extracts the shape of the cells and highlights their contours. This stage is followed by clinical feature extraction, where several relevant clinical features, such as pleomorphism, mean cell perimeter, mean cell density, mean cell area, and polymegathism, are extracted and used for diagnosis. These clinical features are crucial for the early detection of corneal pathologies and for evaluating the health of the corneal endothelium layer. The findings of this study were promising.

1 Introduction

Medical imaging includes the technologies used to view the human body with the aim of monitoring, diagnosing, or treating medical conditions. It is basically aimed at obtaining an image of the internal structure of the body in a manner that is as non-intrusive as possible. Medical imaging has emerged as one of the most commonly used laboratory methods and has gone through substantial changes in the past decade; rapid advancement in this area has led to the development of more accurate and less intrusive devices. The region of interest (ROI) is often segmented manually by a properly trained expert. Manual segmentation can involve multiple subjective measurement decisions, and such decisions may increase the probability of intra- and inter-observer errors. When such errors occur in judging endothelial cells, the consequences can be severe in terms of missed findings (false negatives) and false alarms (false positives); some medical practitioners have stated that raising a false alarm due to erroneous judgement is highly unacceptable. Thus, it is crucial to develop automatic solutions to facilitate speedy analysis and minimise the problems of intra- and inter-observer variation [1]. Today, three parameters are measured when evaluating the health ranking of the endothelium: polymegethism (also termed cell size variation), pleomorphism (also known as hexagonality), and endothelial cell density. Various approaches have been used to separate every cell found in an image of the corneal endothelium, and they gave accurate results. Obtaining reliable cell contours requires manual delineation of the cell boundaries, yet there are many endothelial cells in every square millimetre, so segmenting them manually has proven to be extremely time-consuming.

Additionally, the arrangement of cells in the corneal endothelium is quite important for ophthalmologists, since it provides essential diagnostic information concerning the status of corneal health and signs of any disease [2]. Notably, several decades have passed since the first corneal image was recorded, yet there is still a lack of precise, fully computerised means of calculating the cell borders and successfully performing quantitative assessments of their characteristics; notable inter- and intra-observer disparities can still be seen. Previous studies [3,4] confirmed that there are a number of tools that can be used to assess cell density and the morphometry of the endothelium. Both non-contact specular and confocal microscopes give quality images of the peripheral and central cornea. Besides, another study, by Salvetat et al. [5], states that non-contact confocal microscopy is a current modality that gives the same quality as the other microscopes and generates a large field of view. This means that the extraction of medical features by image processing provides tremendous assistance for the correct diagnosis of corneal health: it increases accuracy and saves time. Analysis of the said parameters can also be carried out automatically by a computer-aided diagnosis model; such a model should be fully automatic both while capturing the image using a medical tool and during the examination by an optometrist (Figure 1).

Figure 1: The anatomy of the human eye and the cornea: (a) section of the frontal part of the human eye, (b) six layers from the anterior to the posterior cornea, and (c) in vivo corneal confocal microscopy image of the corneal endothelium layer.

Huang et al. [6] showed that manual delineation of cells is a quite labour-intensive task, while the performance of the segmentation software provided by microscope manufacturers is unsatisfactory: such integrated software has produced erroneous automated analyses when compared with expert annotation, which calls for a model that is fully automatic. This article focuses on creating such a model through the development and enhancement of image processing techniques, so that the current challenges in the measurement and segmentation of corneal endothelial cells can be addressed.

The main aim of this work is to propose and develop a fully automatic, robust, and real-time model for the segmentation of endothelial cells of the human cornea obtained by in vivo microscopy and for the computation of the different clinical features of endothelial cells. This research also aims to improve the visual quality of the images by reducing their unwanted degradations and enhancing their poor contrast; such improvements can be achieved using image enhancement methods. A pre-processing scheme is proposed in this research to obtain decent image quality that highlights the cell borders. Furthermore, a segmentation method is proposed to achieve accurate and precise cell segmentation. All these methods serve the purpose of clinical feature extraction, whose output is used by experts for better diagnosis of the medical condition of the endothelium layer. The main contributions of this study are as follows:

  • A fully automatic, robust, and real-time model for the segmentation of endothelial cells of the human cornea obtained by in vivo microscopy and computation of the different clinical features of endothelial cells.

  • A pre-processing scheme that obtains decent image quality and highlights the cell borders, together with a segmentation method that achieves accurate and precise cell segmentation.

  • Improved visual quality of the images, achieved by reducing their unwanted degradations and enhancing their poor contrast.

The rest of this study is organised as follows. In Section 2, related work on corneal endothelium enhancement and segmentation is presented. The materials and methods that are used in this study such as image dataset and segmentation techniques are discussed in Section 3. Section 4 provides the results of corneal endothelium segmentation. Finally, the conclusion and future work are presented in Section 5.

2 Related work

Much of the current research on cell segmentation attempts to develop a fully automatic model that copes with both cell detection and image quality, which is difficult because of the intensity characteristics of the images and the large number of ROIs. An early example is the work of Nadachi and Nunokawa [7], who used morphological thinning and scissoring to extract the medical features; lost boundaries were then edited manually. In ref. [8], a histogram was derived by calculating cell size and the number of neighbours for every cell, giving quantitative information about corneal health; cell edges were marked with a dome extractor, and marker-driven watershed segmentation was applied to obtain binary images. Both approaches were semi-automatic and needed manual editing to complete the segmentation. To deal with this challenge, refs [9] and [10] proposed constraining the watershed segmentation through a distance map. A slightly contrasting method was suggested by Bullet et al. [11], who computed watersheds on the map and divided fused cells using Voronoi diagrams. Nevertheless, as shown in ref. [12], these methods are sensitive to parameter settings and therefore require tuning before optimal results are obtained. Selig et al. [13] proposed using the stochastic watershed to avoid user interaction, parameter changes, and empirical settings. Dagher and El Tom [14] used the watershed contours to initialise multiple balloon snakes. A comparable method was suggested by Charłampowicz et al. [15], in which active contours (snakes) evolve from circular sections derived through thresholding. Foracchia and Ruggeri [16] and Ruggeri et al. [17] took advantage of shape modelling using prior knowledge incorporated into a Bayesian analysis framework [18]. Other approaches use neural networks to classify the pixels of the cell body, support vector machines to mark every pixel as cell vertex or body [19], and genetic algorithms to grow a set of vertices into regular hexagons aligned with the cell boundaries [20]. The present research seeks to develop an accurate, reliable, and fully automatic model capable of segmenting the endothelium cells. It also tackles significant issues that make this goal challenging: the artefacts that microscopy may produce during acquisition, including noise, blur, and uneven illumination, especially at the border of the image, owing to the nature of the corneal endothelium layer, the capture mechanism, and the reflection of light.

Most studies enhance the images before the segmentation phase using a sophisticated pre-processing tier, which significantly influences segmentation accuracy; in some models, post-processing was also required. One dedicated pre-processing model was introduced by Khan et al. [21], which applies a bandpass filter to the input image: non-uniform illumination, carried by the low-frequency content, is dealt with in the lower sub-band region, while, as Sharif et al. [22] noted, high-frequency noise is taken care of by the upper sub-band section of the bandpass. A review of the state of the art shows that almost every study on endothelial cell segmentation consists of a pre-processing treatment followed by binarisation just before the segmentation phase. Many researchers utilised various post-processing methods to overcome unwanted results of the segmentation process, such as over-segmentation, under-segmentation, or disconnected markers along the determined cell boundary, which affect feature analysis and extraction; see, for example, the study in ref. [23].

In ref. [23], this is done by applying biomarker estimation to edge images, using Fourier analysis, and finally using the characteristic standard deviation. Image enhancement to improve the cell edges was performed by several authors, because specific artefacts in the images obtained by the confocal or specular microscopy used to acquire medical and biomedical data lead to segmentation issues that are more profound. These include the various associated noises, such as Poissonian, Rician, speckle, and Gaussian noise [24]. Gaussian noise is one of the noises most commonly encountered in such images, and Poisson noise is found in confocal microscopy as a result of the complex appearance of the cells. Subsequently, Sheppard et al. [25] presented the sources of noise as end results of the pinhole size, the form of detection, and the signal-to-noise ratio of the imaging data. Also, Choraś [26] reported that another artefact affecting image quality is uneven illumination due to light; the problem of brightness distribution was treated by adjusting the brightness levels along rows and columns. The methods and techniques used to address such artefacts, combined with noise and dark image edges, are the subject of the investigations reviewed in this section.

3 Materials and methods

3.1 Dataset used

Images of corneal endothelium from 30 eyes stained with alizarine red were acquired with an optical microscope and saved as grey-level digital images, together with the corresponding manually segmented images [17]: 30 images of corneal endothelium and the corresponding 30 manually segmented images. The acquisition instrument was an inverse phase contrast microscope (CK 40, Olympus) at 200× magnification with an analogue camera (SSC-DC50AP, Sony); the mean area assessed per cornea was 0.54 ± 0.07 mm² (range 0.31–0.64 mm²). The images are JPEG-compressed, 768 × 576 pixel monochrome digital images. The source of this dataset was the Department of Ophthalmology, Charité-Universitätsmedizin Berlin, Campus Virchow-Klinikum, Berlin, Germany (Figure 2).

Figure 2: Image dataset examples of specular microscopy.

3.2 Pre-processing

The first stage focuses on enhancing the image brightness, reducing the intensity of darker areas, removing noise while highlighting the ROI borders, and finally smoothening the image using contrast balancing. The Contrast-Limited Adaptive Histogram Equalisation (CLAHE) technique, as depicted in Figure 3, is the first processing technique, applied to the input endothelial cell image. After that, a new image denoising technique called Wavelet Transform Filter and Butterworth Bandpass for Segmentation (WTBBS) is used. Subsequently, brightness level correction is carried out by using the moving average filter and CLAHE to reduce the effects of the non-uniform image lighting produced by the previous step.

Figure 3: The pre-processing phase.

3.2.1 CLAHE technique

This technique operates by splitting the input image into multiple non-overlapping elements of equal size. In order to make adequate statistical estimates for images having N × M pixels, the image is divided into 36 blocks using six divisions both horizontally and vertically. The block creation process is depicted in Figure 4, where the output blocks comprise regions designated into three groups. The first group comprises the four corner regions (CR) of the image. The second group comprises 16 regions referred to as the border region (BR) class: of all the regions lying on the border of the image, the corners are the only ones excluded.

Figure 4: Block creation by dividing the image into 36 blocks.

The 16 remaining regions comprise the third group and are known as inner region (IR). The approach begins by determining the histogram for every region. Subsequently, the required contrast expansion is factored in, and the clipping threshold for the histogram is determined, after which, the histogram is rearranged so that the height does not violate the clipping threshold. Finally, the resulting contrast-limited histograms are processed using the cumulative distribution functions to obtain a greyscale mapping. The CLAHE technique is based on examining the four closest regions to the pixel of interest and using a linear combination of the mapping output. For the regions pertaining to the IR group, the abovementioned process is relatively straightforward. Nevertheless, the case is different for BR and CR groups because they require additional attention.

Computing the histogram for every region is not complicated: for every greyscale in the region, the number of pixels having that greyscale is counted, and the histogram is the collection of all greyscale counts, as shown in Figure 5. This function provides a rough estimate of the greyscale density function. In order to achieve histogram equalisation, the CDF estimate is used. If the number of pixels and greyscales in every region is M and N, respectively, and if $h_{i,j}(n)$, for $n = 0, 1, 2, \ldots, N-1$, is the histogram for the region $(i, j)$, then the corresponding CDF estimate, appropriately scaled by $(N-1)$ for greyscale mapping, is as follows:

(1) $F_{i,j}(n) = \frac{N-1}{M} \sum_{k=0}^{n} h_{i,j}(k), \quad n = 0, 1, 2, \ldots, N-1.$

Figure 5: Histogram representation for every block of the input image: (a) the histogram of the input image, (b) the histogram for the border blocks, (c) the histogram for the inner blocks, and (d) the histogram for the output image.

The given greyscale density function can be approximated to a uniform density function using the specified expression; this conversion is known as histogram equalisation and is limited by the maximum permitted increase in region contrast. The contrast can be reduced to the desired level by limiting the maximum slope of equation (1). The clip limit β is applied to all histograms and sets the maximum slope. The relation between the clip limit β and the clip factor α, expressed as a percentage, is as follows:

(2) $\beta = \frac{M}{N} \left( 1 + \frac{\alpha}{100} (S_{\max} - 1) \right).$

In this case, for a clip factor of zero (α = 0), the clip limit equals M/N, leading to a uniform distribution mapping of the regional pixels onto the possible greyscale levels; the pixel values do not change. The maximum clip limit, reached at α = 100, attains the value $S_{\max} \cdot M/N$. This condition restricts the slope to a permissible upper limit of $S_{\max}$, which typically has a value of four for still images. Nevertheless, it is recommended that, for any other application, the appropriate choice of $S_{\max}$ be obtained experimentally.

A change in the clip factor between 0 and 100 corresponds to a change in the maximum slope of the mapping in the range 1 to $S_{\max}$. The required threshold for image contrast modification determines how much the original histogram is changed. For every greyscale, the upper limit of counts is restricted to β. After this threshold is reached, the extra counts are distributed uniformly among the greyscales under the condition that the clip limit is not breached; the distribution is such that the count for any greyscale never exceeds β. Several iterations may be required for every histogram, and the number of iterations typically increases as the clip factor percentage is reduced. The greyscale mapping for every region is then obtained by applying equation (1) to its modified histogram. The quadrant mapping of the regions in the IR group is done based on the characteristics of the four nearest regions. Figure 6 depicts how the CLAHE algorithm processes the original image: the image contrast is enhanced and the cell borders (ROI) are more precise. The output of this algorithm is used as input for the next stage.
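As an illustration of the block-based mapping described above, the following is a minimal sketch using OpenCV's built-in CLAHE as a stand-in for the proposed variant; the 6 × 6 tile grid mirrors the 36-block division, and clipLimit plays a role analogous to the clip limit β of equation (2), although OpenCV parametrises it differently. The file name is hypothetical.

```python
# Minimal CLAHE sketch using OpenCV as a stand-in for the scheme above.
# The 6x6 tile grid mirrors the 36-block division; clipLimit is analogous
# (but not identical) to the beta/alpha parametrisation of equation (2).
import cv2

img = cv2.imread("endothelium.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(6, 6))
enhanced = clahe.apply(img)  # per-tile equalisation with bilinear interpolation
cv2.imwrite("endothelium_clahe.png", enhanced)
```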

Figure 6: The effect of CLAHE for sample 1: (a) the original image and (b) the filtered image.

3.2.2 New denoising scheme based on Enhanced WTBBS

For noise reduction while the edges of the cells are highlighted, the output of CLAHE is used as input for the WTBBS scheme, which consists of an enhanced version of the classical wavelet transform operating in the frequency domain. An input signal is represented as a set of orthonormal wavelets derived from a single function known as the mother wavelet; this representation is created using repeated translations and dilations. The wavelet transform involves the use of filters. The prominent and typically used filters comprise the Daubechies (DB) family, which includes filters such as D2 (also known as the Haar filter), D4, D6, and D8, having lengths of 2, 4, 6, and 8, respectively.

When the scaling and wavelet functions of this method are used, the image is divided into four different frequency bands. During the first step, the information contained in the image is separated into four sub-bands, named LL (low-low), LH (low-high), HL (high-low), and HH (high-high). The details of the image are presented on a different plane for every sub-band: the LL sub-band holds the low-frequency approximation, the LH sub-band the vertical detail, and the HH and HL sub-bands the diagonal and horizontal detail, respectively. Subsequently, enhancement using the WT method includes the elimination of the high frequencies, based on a threshold value, from the sub-bands containing high-frequency information, while the Butterworth bandpass filter is used for the LL sub-band; the threshold regulates the partial elimination of the high frequencies that create noise in the image. Hence, threshold determination is the most crucial stage of the proposed method. The threshold must be determined accurately, since it controls vital information concerning the cell bodies and borders, and an inappropriate threshold could lead to incorrect segmentation during the subsequent process: if the threshold is too small, noise will remain, whereas a large threshold would eliminate crucial regions from the image and cause blurring. Hence, it is crucial to select the threshold accurately, considering both hard and soft thresholds; it should be noted that a soft threshold provides better performance than a hard threshold (Figures 7 and 8).
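A minimal sketch of the single-level decomposition described above, using PyWavelets with a Daubechies filter; `enhanced` is assumed to be the CLAHE output from the previous sketch, and the wavelet choice is an assumption rather than the authors' setting.

```python
# One-level 2-D wavelet decomposition into approximation and detail sub-bands.
import numpy as np
import pywt

image = np.asarray(enhanced, dtype=float)   # CLAHE output from the previous sketch
cA, (cH, cV, cD) = pywt.dwt2(image, "db4")  # Daubechies filter (assumed choice)
# cA ~ LL (approximation), cH and cV ~ the horizontal/vertical detail bands
# (LH/HL naming varies between sources), cD ~ HH (diagonal detail).
```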

Figure 7: Sub-bands LL, LH, HL, and HH.

Figure 8: Sub-bands for the vertical, diagonal, approximation, and horizontal information concerning the image.

The primary objective of the BayesShrink technique is to minimise the Bayesian risk. The technique utilises a soft threshold and is sub-band dependent, meaning that the threshold operator is applied at every resolution level of the WT decomposition; this threshold is considered smoothness adaptive. The Bayes threshold ($t_B$) is defined in equation (3):

(3) $t_B = \sigma^{2} / \sigma_{s},$

where $\sigma^{2}$ represents the noise variance and $\sigma_{s}$ the standard deviation of the noise-free signal. The noise variance $\sigma^{2}$ is estimated from the HH sub-band using the robust median estimator, $\hat{\sigma} = \operatorname{median}(|w_{\mathrm{HH}}|)/0.6745$. The observed wavelet coefficients are modelled as the sum of the signal and the noise, as specified in equation (4):

(4) $w(x, y) = s(x, y) + n(x, y).$

Since the signal and the noise are independent of each other, their variances add:

(5) $\sigma_{w}^{2} = \sigma_{s}^{2} + \sigma^{2}.$

The variance $\sigma_{w}^{2}$ of the noisy coefficients is estimated using equation (6), and the signal standard deviation $\sigma_{s}$ follows from equation (7):

(6) $\sigma_{w}^{2} = \frac{1}{n^{2}} \sum_{x, y = 1}^{n} w^{2}(x, y),$

(7) $\sigma_{s} = \sqrt{\max(\sigma_{w}^{2} - \sigma^{2},\, 0)}.$

Using $\sigma^{2}$ and $\sigma_{s}$, the Bayes threshold is calculated from equation (3).
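A minimal sketch of the BayesShrink computation of equations (3)-(7), continuing the decomposition sketch above: the noise standard deviation comes from the robust median estimator on the HH sub-band, and soft thresholding uses pywt.threshold. The per-band handling of the degenerate case is an assumption.

```python
# BayesShrink-style soft thresholding of the detail sub-bands (equations (3)-(7)).
import numpy as np
import pywt

sigma = np.median(np.abs(cD)) / 0.6745          # noise std from HH (median estimator)

def bayes_threshold(band, sigma):
    sigma_w2 = np.mean(band ** 2)               # equation (6): noisy-band variance
    sigma_s = np.sqrt(max(sigma_w2 - sigma ** 2, 0.0))  # equation (7)
    # Degenerate case: if no signal remains, threshold everything away.
    return sigma ** 2 / sigma_s if sigma_s > 0 else np.abs(band).max()

cH_d = pywt.threshold(cH, bayes_threshold(cH, sigma), mode="soft")
cV_d = pywt.threshold(cV, bayes_threshold(cV, sigma), mode="soft")
cD_d = pywt.threshold(cD, bayes_threshold(cD, sigma), mode="soft")
```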

As shown in Figure 9, the soft threshold is used for the three sub-bands HH, HL, and LH to reduce noise and remove unwanted content unrelated to the cell edges. To highlight the cell borders and make the structure of the cells clearer, the Butterworth bandpass filter is used only for the LL sub-band. The mathematical derivation of this filter requires the multiplication of the high and low pass filter transfer functions, where the cut-off frequency threshold is higher for the low pass filter:

(8) $H_{\mathrm{LP}}(u, v) = \frac{1}{1 + [D(u, v)/D_{L}]^{2n}},$

(9) $H_{\mathrm{HP}}(u, v) = 1 - \frac{1}{1 + [D(u, v)/D_{H}]^{2n}},$

(10) $H_{\mathrm{BP}}(u, v) = H_{\mathrm{LP}}(u, v) \cdot H_{\mathrm{HP}}(u, v),$

where $D_{H}$ and $D_{L}$ denote the cut-off frequency thresholds for the high and low pass filters, respectively; n denotes the filter order, while $D(u, v)$ represents the distance of the point from the origin of the frequency plane. The Butterworth filter has a smooth transfer function without discontinuity or a precisely defined frequency cut-off. The filter order is a significant determinant of the frequency range passed by the filter; during the selection of n, a trade-off needs to be made between the requirements of the spatial domain (quicker decay) and the frequency domain (sharper cut-off). As specified previously, the output of CLAHE is used as the input for the WT. Figure 10 depicts the application of the enhanced WT filter to the CLAHE images to make the objects clear.
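A minimal sketch of equations (8)-(10) as a frequency-domain Butterworth bandpass applied to the LL sub-band, followed by the inverse wavelet transform to rebuild the denoised image; the cut-offs and the filter order are assumed values, not those of the authors.

```python
# Frequency-domain Butterworth bandpass on the LL sub-band (equations (8)-(10)),
# then inverse DWT to reassemble the denoised image.
import numpy as np
import pywt

def butterworth_bandpass(band, d_low, d_high, n=2):
    rows, cols = band.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)      # distance from spectrum centre
    H_lp = 1.0 / (1.0 + (D / d_low) ** (2 * n))          # equation (8), cut-off D_L
    H_hp = 1.0 - 1.0 / (1.0 + (D / d_high) ** (2 * n))   # equation (9), cut-off D_H
    F = np.fft.fftshift(np.fft.fft2(band))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H_lp * H_hp)))  # equation (10)

LL_f = butterworth_bandpass(cA, d_low=60.0, d_high=5.0, n=2)  # assumed cut-offs
denoised = pywt.idwt2((LL_f, (cH_d, cV_d, cD_d)), "db4")      # rebuild the image
```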

Figure 9: WTBBS schema for endothelial cell segmentation.

Figure 10: The effect of the enhanced WT filter: (a) the image resulting from CLAHE and (b) the result of the enhanced WT.

3.2.3 Moving average filter

The moving average is an algebraic operation performed on image neighbourhoods under conditions defined by a geometric rule. For an image f filtered using a window B that gathers the grey-level information according to the geometry of the window, the moving-average processed image G is specified as:

(11) $G(n) = \mathrm{AVE}[B f(n)],$

where the operator AVE computes the sample average. Hence, the calculations concerning the local averages are conducted over local image neighbourhoods, thereby leading to superior smoothening. This process uses the image output by the Butterworth bandpass as input: the moving-average technique is applied first, followed by the CLAHE filter, to produce the final processed image with the ROI borders highlighted. The proposed technique improves cell visibility by reducing the overlap between the cells, thereby making the image clearer for the subsequent segmentation stage (Figure 11).
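A minimal sketch of equation (11) using scipy's uniform filter as the moving average, followed by the second CLAHE pass described above; the 15 × 15 window and the reuse of `denoised` from the previous sketch are assumptions.

```python
# Local-mean (moving average) smoothing of equation (11), followed by the
# second CLAHE pass described above.
import cv2
import numpy as np
from scipy.ndimage import uniform_filter

smoothed = uniform_filter(np.asarray(denoised, dtype=float), size=15)  # assumed window
smoothed_u8 = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(6, 6))
preprocessed = clahe.apply(smoothed_u8)  # final pre-processed image
```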

Figure 11: The effects of using the moving-average technique and the CLAHE filter for two cases, (a) and (b).

3.3 Trainable segmentation and distance transform (TDWS) to enhance the watershed transform for cell segmentation

The moving average filter produces an enhanced image, but these images still contain a number of overlapping cells. This scenario warrants the use of the watershed transform to segment and reduce the overlap between the cells. Two overlapping objects can be identified if their shapes can be differentiated; therefore, it is essential to separate all overlapping objects. For the accurate estimation of the borders of the overlapping objects, the proposed technique includes several processing steps, beginning with segmentation, followed by the Euclidean distance transformation, H-minima, and, finally, the watershed transform. Overlapping of objects can be avoided if the borders are estimated using segmentation after the first process that facilitates the detection of aggregates. Detection of aggregates and segmentation proceed as described below (a code sketch of these steps follows the list):

  1. Input the filtered test image.

  2. Binarise the greyscale filtered image to get the initial segmented image.

  3. Use the distance transformation for all the binary images.

  4. Determine the minimum values for every object and use them as the seeds for the watershed transform (by using the HMIN marker technique).

  5. Apply the enhanced watershed transform with Voronoi tessellation markers.
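The sketch below maps steps 2-5 onto common scipy/scikit-image equivalents; Otsu binarisation stands in for the trainable initial segmentation of Section 3.3.1, and the minima depth h = 3 is an assumed value. Depending on whether cell bodies are brighter or darker than the borders, the threshold comparison may need inverting.

```python
# Steps 2-5 with scipy/scikit-image stand-ins for the proposed components.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import h_minima
from skimage.segmentation import watershed

binary = preprocessed > threshold_otsu(preprocessed)  # step 2 (stand-in binarisation)
dist = ndi.distance_transform_edt(binary)             # step 3: Euclidean distance map
surface = -dist                                       # complement: cells become basins
markers = label(h_minima(surface, h=3))               # step 4: HMIN-based seeds
labels = watershed(surface, markers, mask=binary)     # step 5: marker-driven watershed
```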

3.3.1 Initial segmentation

In this study, owing to the significant difference in the intensity values of the images, a new hybrid technique based on the fractal dimension [27] is proposed for the segmentation and extraction of the poorly defined cell borders. The literature indicates that thresholding is often used for binarising the image because of its simplicity compared to other segmentation techniques, and it may be more efficient in segmenting images for specific applications. Frequently, a single threshold value is used for separating the image into borders and objects, although multiple thresholds are also commonly used; additionally, thresholds are categorised as global or local depending on whether they adapt to the local features of different parts of the image. This study shows clearly that thresholds alone are not sufficient to achieve image segmentation, since they under- or over-segment some parts of the object. Therefore, a method is suggested for identifying the cells through fractal dimension features. The primary aspect, in this case, is capturing the borders of the cells, and the primary objective of the initial segmentation is to demarcate the cells from the other parts of the image. The first step comprises model training, where numerous images are used for training the model. Several samples are chosen from every image, and fixed-size (11 × 11) regions are taken from inside the cells (Class 1) and from the cell borders (Class 2); a semi-automatic mechanism is implemented to generate these samples.

The fractal dimension is computed for every window of each class rather than using the raw pixel intensities. A KNN classifier is trained using the labelled features obtained from the samples (Figure 12, training phase). For every block, three fractal dimension values are calculated, and every FD is used as an input to the KNN classifier. When training is finished, the images of the testing set are fed in for segmentation (the testing phase is depicted in Figure 12). The blocks are scanned at the pixel level: for every pixel, a fixed-size square region is constructed using the pixel as the centre. Subsequently, the three FD features of the region are extracted and classified into two classes ("inside cell" or "cell border") using the trained KNN classifier. After pixel labelling is completed, the "inside cell" region is considered the segmented cell.

Figure 12: Initial-level trainable cell segmentation: (a) training phase and (b) testing phase.

The feature descriptor comprises a fractal dimension histogram that facilitates object identification in digital images. The region cannot be described by only one FD; thus, three threshold values (the median threshold, the mean threshold, and Otsu's threshold) are used for calculating three FD values. The FD feature for every binary block is computed using the fractal dimension. The complexity of the shape or texture of an object can be determined by measuring fractal dimensions in the context of the image analysis paradigm. Fractal dimensions are defined by fractal geometry through several different approaches; one of the basic fractal dimensions is the Hausdorff dimension. For an object that has a Euclidean dimension E, the Hausdorff fractal dimension $D_0$ is calculated using the following equation:

(12) $D_0 = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)},$ where $N(\varepsilon)$ is the number of boxes of size $\varepsilon$ needed to cover the object.
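A minimal sketch of the trainable initial segmentation under stated assumptions: a box-counting estimate of equation (12) is computed on each 11 × 11 block binarised with the mean, median, and Otsu thresholds, and the resulting three-value feature vector feeds a KNN classifier (k = 5 is an assumption; the training arrays X and y would come from the labelled samples described above).

```python
# Box-counting estimate of D_0 (equation (12)) and the three-FD feature
# vector used to train a KNN classifier on 11x11 labelled blocks.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.neighbors import KNeighborsClassifier

def box_counting_fd(binary):
    sizes = [1, 2, 4]                       # box sizes suited to an 11x11 block
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        boxes = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())  # boxes containing foreground
    # D_0 is minus the slope of log N(eps) against log eps.
    return -np.polyfit(np.log(sizes), np.log(np.maximum(counts, 1)), 1)[0]

def fd_features(block):
    # Three binarisations (mean, median, Otsu) give three FD features;
    # assumes the block is not constant.
    thresholds = (block.mean(), np.median(block), threshold_otsu(block))
    return [box_counting_fd(block > t) for t in thresholds]

knn = KNeighborsClassifier(n_neighbors=5)  # k = 5 is an assumed setting
# knn.fit(X, y)  # X: FD triplets of labelled blocks; y: 0 = inside cell, 1 = border
```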

3.3.2 Distance transform and H-minimum marker for the watershed transform

The distance transformation of a binary image is defined as follows: for every pixel x in the set A, $\mathrm{DT}(A)(x)$ is the distance from x to the complement of A:

(13) $\mathrm{DT}(A)(x) = \min\{\, d(x, y) : y \in A^{c} \,\}.$

Therefore, the distance transform of a binary image can be computed under the assumption that $A^{c}$ is the set of 1-valued pixels. This produces a greyscale map that can be segmented using the watershed transformation. However, the watershed technique is prone to over-segmentation unless appropriate markers are chosen and used. The outcome of a typical distance transform computation is illustrated in Figure 13. A binary image of two overlapping objects is created from the initial "coarse" segmentation (Figure 13(a)); however, the overlapping objects are not distinguished by the coarse segmentation. This binary image can be generated using the trainable classifier. After the distance transform is used to generate the greyscale image shown in Figure 13(c), the original binary image (Figure 13(b)) is no longer required. In Figure 13(b), the maxima correspond to black areas far from the white background; nevertheless, the surface can contain numerous local maxima. The greyscale image in Figure 13(c) is complemented to generate the image depicted in Figure 13(d), which has a white background, so that the previous maxima now appear as minima. This operation implements an internal distance transform. Subsequently, the watershed transform is applied to the image depicted in Figure 13(d) so that the overlapping objects can be separated. The region that requires segmentation can be complemented outside the aggregate to identify suitable external markers. When over-segmentation occurs, spurious minima are created, thereby necessitating the creation of suitable markers.

Figure 13: (a) Endothelial cells with original greyscale pixels, (b) binary image of overlapping objects, (c) distance transform of the image, (d) marker control for the cells, and (e) watershed transform output.

By grouping the inner marking processes into a single procedure, several greyscale morphological functions can be applied, so that the adverse effects of the spurious minima are controlled before the application of the watershed technique. The morphological functions are specified as follows:

  • Minima are imposed at precise locations by inserting a −∞ value at the specified positions, thereby removing the local minima corresponding to other areas of the greyscale image.

  • Placing these imposed minima at the appropriate places leads to the creation of useful markers.

  • H-minima can be applied to the output of the inner distance transform so that all minima with a depth less than a specified positive value are removed. This allows the minima whose depth falls below the size threshold to be suppressed, so that the remaining area minima can be used as effective markers.

This process can be implemented using morphological geodesic reconstruction by erosion of image f: the surface is raised by the depth threshold h, and the structuring element D describes the connectivity (Soille, 2013):

(14) $\mathrm{HMIN}_{h}^{D}(f) = R^{\varepsilon}_{f}(f + h).$

Although the description of the process is simple, determining an appropriate H-minima threshold h and an appropriate size for the transformation is a challenging task: markers can be lost, leading to the retention of spurious minima, and, along the same lines, neighbouring local minima may merge, counteracting the separation of the markers. Evaluation indicates that a crucial prerequisite is that spurious minima and minima merged into larger basins be disposed of, while keeping separate the two minima situated at the approximate midpoints of the overlapping objects that require segmentation. In this study, the fast immersion-based watershed transformation is applied to the output of the distance transform so that an initial separation of overlapping objects can be achieved (Figure 14).
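A minimal sketch of equation (14): the H-minima transform realised as greyscale reconstruction by erosion of f + h over f, reusing the complemented distance map from the earlier sketch; h = 3 is an assumed depth.

```python
# H-minima via morphological reconstruction by erosion (equation (14)).
from skimage.morphology import reconstruction

f = -dist                                          # complemented distance transform
h = 3                                              # assumed minima depth threshold
hmin = reconstruction(f + h, f, method="erosion")  # HMIN_h(f) = R_f^eps(f + h)
# Minima shallower than h are filled in 'hmin'; the surviving minima are
# the marker candidates used for the watershed in Figure 13(d).
```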

Figure 14: Images depicting the output of the TDWS segmentation-based method.

To determine the clinical features, the objects must be clear. For this purpose, the cell borders must be drawn and made more precise for the next stage (calculating the clinical features). The Voronoi diagram is used for the final segmentation to identify the borders and make them clearer; this technique is applied to the output of the watershed transformation (the segmented image).
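A minimal sketch of this border refinement under the assumption that the Voronoi seeds are the centroids of the watershed regions from the earlier sketch; the ridge structure of the tessellation then traces the refined cell borders.

```python
# Voronoi tessellation seeded by the centroids of the watershed regions.
import numpy as np
from scipy.spatial import Voronoi
from skimage.measure import regionprops

centroids = np.array([r.centroid for r in regionprops(labels)])
vor = Voronoi(centroids)  # the Voronoi cells approximate the endothelial mosaic
# vor.vertices and vor.ridge_vertices give the polygon edges that can be
# drawn over the image as the final, sharpened cell borders.
```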

3.4 Medical feature measurement

As mentioned before, the main objective of the proposed system is to determine the clinical features. Based on the previous stages, a segmented image having clear cell borders is produced. This stage involves the extraction of the clinically valuable features from the segmented endothelial cell images using objective and automatic schemes. The intention is to describe the health status of the endothelial cell images using pleomorphism, mean cell density (MCD, cells per mm²), polymegathism, mean cell perimeter (MCP, µm), and mean cell area (MCA, µm²). The morphological feature extraction by the trainable model proposed for segmenting the endothelial cells is reported in Figure 15. The low quality of these images makes it challenging to accurately segment the cells and estimate the morphological features in some regions that are extremely dark, blurred, or highly reflective. To address this issue and enhance the clinical analysis, the proposed system requires input from an ophthalmologist, who selects the most visible ROI in the segmented sample. Afterwards, the morphological features of the cropped region are calculated automatically. In this calculation, only the cells intersecting the nearby binary borders of the frame are included, whereas those intersecting the other borders of the frame are excluded. Nevertheless, using the entire image implies the exclusion of all the outermost cells from the statistical computation.

Figure 15: Medical feature extraction.

The objective of this step is to prevent the inadvertent segmentation of the incomplete cells lying along the edges of the input sample.

Mean cell density (MCD) is defined as the number of endothelial cells ($C_{\text{number}}$) in the trimmed ROI (or complete picture) divided by the total area A of the trimmed ROI (or complete picture), as specified below:

$\mathrm{MCD} = \frac{C_{\text{number}}}{A}.$

  1. Polymegathism (coefficient of variation [CV]) quantifies the variation of cell area within the region comprising the endothelial cells. If the standard deviation (SD) of the cell area rises, there is a higher chance of error in the coefficient; hence, when polymegathism rises, the estimation precision of the mean cell area (MCA) decreases (McCarey, Edelhauser & Lynn, 2008). The mathematical expression for polymegathism is as follows:

    $\text{Polymegathism} = \frac{\mathrm{SD}_{\text{cell area}}}{\mathrm{MCA}} \times 100,$

    where $\mathrm{SD}_{\text{cell area}}$ is the standard deviation of the cell areas distributed around the MCA.

  2. Pleomorphism (hexagonality coefficient [HC]) is defined as the proportion of cells having an approximately hexagonal shape (six sides). It is expressed as the number of hexagonal cells, $C_{\text{hexagonal}}$, divided by the number of cells in the trimmed ROI (or complete picture), $C_{\text{image}}$, as follows:

$\text{Pleomorphism} = \frac{C_{\text{hexagonal}}}{C_{\text{image}}} \times 100.$
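A minimal sketch of the three measures defined above, computed from the labelled segmentation of the earlier sketches; the pixel-to-micrometre scale, the ROI-area definition, and the neighbour-count test for hexagonality are all assumptions for illustration.

```python
# Clinical features from the labelled segmentation (pixel units; converting
# to micrometres requires the microscope's calibration factor, assumed known).
import numpy as np
from skimage.measure import regionprops

regions = regionprops(labels)
areas = np.array([r.area for r in regions])
roi_area = labels.astype(bool).sum()           # assumed ROI area: all segmented pixels

mcd = len(regions) / roi_area                  # mean cell density (cells per unit area)
mca = areas.mean()                             # mean cell area (MCA)
mcp = np.mean([r.perimeter for r in regions])  # mean cell perimeter (MCP)
polymegathism = areas.std() / mca * 100        # CV of cell area, in percent
# Pleomorphism requires each cell's neighbour count (e.g. from a region
# adjacency graph); cells with exactly six neighbours are counted as hexagonal.
```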

4 Segmentation assessment results

The proposed segmentation model has been explained in detail. To evaluate it, we measured the closeness and the correlation between the manual features determined by the expert and the automated features extracted by the proposed model. Linear regression was used to measure correlation, which evaluates the effectiveness of the proposed segmentation algorithm: the automatic measurements of the endothelial cell features, i.e. density, cell area, CellPer, polymegathism, and pleomorphism, are compared with the equivalent manual measurements taken by a domain expert and provided in the ground-truth document. The evaluation of the features was done using several regression-based statistical methods to show the consistency between the manual and the automatic measurements through regression and correlation.

In addition, the Bland–Altman method is used to measure the closeness between the two solutions; it is a scatter-plot method introduced by Bland and Altman (1986) that quantitatively describes the agreement or disagreement between two measurement methods.
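A minimal sketch of both evaluation tools with hypothetical data: scipy's linregress gives the regression statistics, and the Bland–Altman plot shows the difference of the two measurements against their mean, with the bias and the 1.96 SD limits of agreement.

```python
# Linear regression and a Bland-Altman plot for manual vs automatic features.
import matplotlib.pyplot as plt
import numpy as np
from scipy import stats

manual = np.array([2450.0, 2600.0, 2380.0, 2510.0])  # hypothetical manual densities
auto = np.array([2430.0, 2650.0, 2350.0, 2540.0])    # hypothetical automatic values

res = stats.linregress(manual, auto)
print(f"R^2 = {res.rvalue ** 2:.4f}, slope = {res.slope:.2f}")

mean, diff = (manual + auto) / 2, auto - manual
md, sd = diff.mean(), diff.std()
plt.scatter(mean, diff)
plt.axhline(md)                        # mean difference (bias)
plt.axhline(md + 1.96 * sd, ls="--")   # upper limit of agreement
plt.axhline(md - 1.96 * sd, ls="--")   # lower limit of agreement
plt.xlabel("Mean of manual and automatic")
plt.ylabel("Automatic - manual")
plt.show()
```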

4.1 Automated vs manual measurements

The Watershed Transform-based model was applied to the images, and five clinical features were used to determine the correlation between the automatic measurements and the manual observations. Figure 16 depicts the density measurements obtained by the automated and manual techniques. Table 1 indicates that R² for this feature equals 0.7301, and the slope of the regression line is approximately 45°.

Figure 16: Correlation for the density feature for the images segmented using the Watershed Transform-based model.

Table 1

R² values

       Density   Cell area   CellPer   Polymegathism   Pleomorphism
R²     0.7301    0.8764      0.828     0.7695          0.7112

Furthermore, based on the R² for the cell area, it can be observed that the proposed model measures cell area with higher accuracy than density (Figure 17).

Figure 17: Correlation for the cell area feature for the segmented images.

The correlations for CellPer, polymegathism, and pleomorphism are depicted in Figures 18–20, respectively. Table 1 indicates the R² for every feature.

Figure 18: Correlation for the CellPer feature for the segmented images.

Figure 19: Correlation for the polymegathism feature for the segmented images.

Figure 20: Correlation for the pleomorphism feature for the segmented images.

Figures 21–25 depict the Bland–Altman scatter plots for density, cell area, CellPer, polymegathism, and pleomorphism. Table 2 contains information about the mean difference, the confidence limits, and the parameters of the linear fit.

Figure 21: Bland–Altman scatter plot for the density feature.

Figure 22: Bland–Altman scatter plot for the cell area feature.

Figure 23: Bland–Altman scatter plot for the CellPer feature.

Figure 24: Bland–Altman scatter plot for the polymegathism feature.

Figure 25: Bland–Altman scatter plot for the pleomorphism feature.

Table 2

Mean difference, confidence limits, and the parameters for the linear fit

                            Density   Cell area   CellPer   Polymegathism   Pleomorphism
Mean difference             12.62     −7.31       0.31      −0.75           −0.44
Confidence limit (upper)    454.86    25.48       4.56      4.97            5.51
Confidence limit (lower)    −429.61   −40.10      −3.94     −6.48           −6.41
Linear fit (slope)          −0.05     −0.17       −0.05     −0.25           −6.41
Linear fit (intercept)      170.54    43.79       3.96      10.92           −6.41

5 Conclusion

Several methods were used in this article as significant components of endothelial cell segmentation. The existing methods of image enhancement and segmentation have been improved considerably through original ideas. The major contributions of the present study on medical feature extraction based on segmentation, ranked from top to bottom by importance, are as follows. A new image enhancement scheme was achieved through three novel ideas: (a) a modified CLAHE approach that focuses on the ROI to enhance the contrast of the entire image evenly; (b) a new denoising approach based on the enhanced WTBBS; and (c) treatment of the images from the previous steps with a moving average filter combined with CLAHE, applying the moving-average technique first and the CLAHE filter second to produce the final processed image with highlighted ROI borders. Finally, TDWS segments images with small and overlapping cells; separating all overlapping objects is essential for the accurate estimation of the borders of the overlapping cells.

Conflict of interest: The authors declare that there is no conflict of interest regarding the publication of this article.

References

[1] Vigueras-Guillen JP, Andrinopoulou ER, Engel A, Lemij HG, van Rooij J, Vermeer KA, et al. Corneal endothelial cell segmentation by classifier-driven merging of oversegmented images. IEEE Trans Med Imaging. 2018;37(10):2278–89. doi:10.1109/TMI.2018.2841910.

[2] Bourne WM. Biology of the corneal endothelium in health and disease. Eye. 2003;17(8):912–8. doi:10.1038/sj.eye.6700559.

[3] Hoppenreijs VPT, Pels E, Vrensen GFJM, Treffers WF. Corneal endothelium and growth factors. Surv Ophthalmol. 1996;41(2):155–64. doi:10.1016/S0039-6257(96)80005-1.

[4] Kitzmann AS, Winter EJ, Nau CB, McLaren JW, Hodge DO, Bourne WM. Comparison of corneal endothelial cell images from a noncontact specular microscope and a scanning confocal microscope. Cornea. 2005;24(8):980–4. doi:10.1097/01.ico.0000159737.68048.97.

[5] Salvetat ML, Zeppieri M, Miani F, Parisi L, Felletti M, Brusini P. Comparison between laser scanning in vivo confocal microscopy and noncontact specular microscopy in assessing corneal endothelial cell density and central corneal thickness. Cornea. 2011;30(7):754–9. doi:10.1097/ICO.0b013e3182000c5d.

[6] Huang J, Maram J, Tepelus TC, Sadda SR, Chopra V, Lee OL. Comparison of noncontact specular and confocal microscopy for evaluation of corneal endothelium. Eye Contact Lens. 2018;44:S144–50. doi:10.1097/ICL.0000000000000362.

[7] Nadachi R, Nunokawa K. Automated corneal endothelial cell analysis. Proceedings of the Fifth Annual IEEE Symposium on Computer-Based Medical Systems; 1992. p. 450–7.

[8] Vincent LM, Masters BR. Morphological image processing and network analysis of cornea endothelial cell images. Image Algebra and Morphological Image Processing III. 1992;1769:212–26. doi:10.1117/12.60644.

[9] Angulo J, Matou S. Automatic quantification of in vitro endothelial cell networks using mathematical morphology. Proceedings of the 5th IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2005); 2005. p. 51–6.

[10] Gavet Y, Pinoli J-C. Visual perception based automatic recognition of cell mosaics in human corneal endothelium microscopy images. Image Anal Stereol. 2008;27(1):53–61. doi:10.5566/ias.v27.p53-61.

[11] Bullet J, Gaujoux T, Borderie V, Bloch I, Laroche L. A reproducible automated segmentation algorithm for corneal epithelium cell images from in vivo laser scanning confocal microscopy. Acta Ophthalmol. 2014;92(4):312–7. doi:10.1111/aos.12304.

[12] Gavet Y, Pinoli JC. Comparison and supervised learning of segmentation methods dedicated to specular microscope images of corneal endothelium. Int J Biomed Imaging. 2014;2014:704791. doi:10.1155/2014/704791.

[13] Selig B, Vermeer KA, Rieger B, Hillenaar T, Luengo Hendriks CL. Fully automatic evaluation of the corneal endothelium from in vivo confocal microscopy. BMC Med Imaging. 2015;15(1):1–15. doi:10.1186/s12880-015-0054-3.

[14] Dagher I, El Tom K. WaterBalloons: a hybrid watershed Balloon Snake segmentation. Image Vis Comput. 2008;26(7):905–12. doi:10.1109/IJCNN.2007.4370921.

[15] Charłampowicz K, Reska D, Bołdak C. Automatic segmentation of corneal endothelial cells using active contours. Adv Comput Sci Res. 2014;11:47–60.

[16] Foracchia M, Ruggeri A. Cell contour detection in corneal endothelium in-vivo microscopy. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2000;2:1033–4. doi:10.1109/IEMBS.2000.897902.

[17] Ruggeri A, Scarpa F, De Luca M, Meltendorf C, Schroeter J. A system for the automatic estimation of morphometric parameters of corneal endothelium in alizarine red-stained images. Br J Ophthalmol. 2010;94(5):643–7. doi:10.1136/bjo.2009.166561.

[18] Foracchia M, Ruggeri A. Corneal endothelium cell field analysis by means of interacting Bayesian shape models. Conf Proc IEEE Eng Med Biol Soc. 2007:6036–9. doi:10.1109/IEMBS.2007.4353724.

[19] Poletti E, Ruggeri A. Segmentation of corneal endothelial cells contour through classification of individual component signatures. XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013; 2014. p. 411–4. doi:10.1007/978-3-319-00846-2_102.

[20] Scarpa F, Ruggeri A. Development of a reliable automated algorithm for the morphometric analysis of human corneal endothelium. Cornea. 2016;35(9):1222–8. doi:10.1097/ICO.0000000000000908.

[21] Khan MA, Khan MK, Khan MAU, Lee S. Endothelial cell image enhancement using decimation-free directional filter banks. IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS); 2006. p. 884–7. doi:10.1109/APCCAS.2006.342183.

[22] Sharif MS, Qahwaji R, Shahamatnia E, Alzubaidi R, Ipson S, Brahma A. An efficient intelligent analysis system for confocal corneal endothelium images. Comput Methods Prog Biomed. 2015;122(3):421–36. doi:10.1016/j.cmpb.2015.09.003.

[23] Vigueras-Guillen JP, Van Rooij J, Lemij HG, Vermeer KA, Van Vliet LJ. Convolutional neural network-based regression for biomarker estimation in corneal endothelium microscopy images. 2019 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2019. p. 876–81. doi:10.1109/EMBC.2019.8857201.

[24] Meziou L, Histace A, Precioso F, Matuszewski BJ, Murphy MF. Confocal microscopy segmentation using active contour based on alpha (α)-divergence. 2011 18th IEEE International Conference on Image Processing; 2011. p. 3077–80. doi:10.1109/ICIP.2011.6116315.

[25] Sheppard CJ, Gan X, Gu M, Roy M. Signal-to-noise ratio in confocal microscopes. In: Handbook of biological confocal microscopy. Boston, MA: Springer; 2006. p. 442–52. doi:10.1007/978-0-387-45524-2_22.

[26] Choraś RS. Cell detection in corneal endothelial images using directional filters. Adv Intell Syst Comput. 2012;389. doi:10.1007/978-3-319-23814-2_14.

[27] Costa AF, Humpire-Mamani G, Traina AJM. An efficient algorithm for fractal analysis of textures. 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images; 2012. p. 39–46. doi:10.1109/SIBGRAPI.2012.15.

Received: 2021-09-07
Revised: 2021-09-07
Accepted: 2021-10-11
Published Online: 2022-03-23

© 2022 Ahmed Saifullah Sami and Mohd Shafry Mohd Rahim, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
