Robust c-prototypes algorithms for color image segmentation

Abstract

In this paper, we present a modified clustering algorithm to segment color images. The proposed technique is based on the robust c-prototypes algorithm with some modifications in its objective function and updating equations. The JK chromatic subspace of the IJK color space is used to segment color images. The algorithm performance is tested on real images with natural artifacts that make the segmentation process difficult. Additionally, a remote sensing image is segmented to demonstrate that the proposed algorithm can be used in real applications. Simulation results indicate that the proposed method is more effective than others proposed in the literature in terms of objective and subjective criteria.

1 Introduction

Clustering is the process of discovering and organizing objects into subsets of similar objects called clusters. Pixel clustering in the three-dimensional (3D) color space on the basis of color similarity is one of the current approaches in the field of color image segmentation. The colors in an image naturally form dense clusters in the color space, appearing as pixel clouds that represent the clusters. Clustering is often seen as an unsupervised classification of pixels, and usually no a priori knowledge of the image is used during the clustering process [1].

Different clustering techniques have been proposed for color image segmentation [2]. One of the most popular is the fuzzy c-means (FCM) technique [3]. In general, the FCM algorithm is a highly effective method for segmenting noise-free images, but in the presence of natural artifacts (noise, intensity or color inhomogeneity within regions, regions with similar textures, shadows, object reflections, etc.), it suffers from two shortcomings that make it very sensitive. The first is that, in its conventional formulation, it does not consider any spatial information in the image context [4–8]. The second is that its objective function can be seen as a formulation of the least squares method, in which one tries to minimize the error between the feature vectors and the vector of group centers. Outliers have a great effect during this minimization because of the quadratic term in the FCM objective function; it is therefore necessary to use a loss function that increases less rapidly than the quadratic one, and thereby control the influence of atypical information.

Many researchers have taken the FCM algorithm as a starting point and modified it to address the first shortcoming. In [4], a spatial function is introduced into the membership function; this spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. Ahmed et al. [5] modified the objective function of the FCM by adding a regularizer that considers the labels of all the neighboring voxels in an image to compensate for the bias field effect of the data. Other authors such as Zhang and Chen [6] followed the same line of work as Ahmed et al., but their algorithms introduce the median- and mean-filtered images, which can be computed in advance and hence reduce the computation time. Zhao et al. [7] used an adaptive spatial parameter for each pixel, designed to make the non-local spatial information of each pixel play a different role in guiding the segmentation of noisy images. Liu and Pham [8] modified the FCM by adding a spatial penalty term to the objective function. The penalty term considers the fuzzy memberships of the neighbors of the current sample, often a pixel or voxel in 2D or 3D images. By this means, the clustering algorithm takes both the image-intensity information and the spatial information into account and hence is more robust to noise and outliers, producing better segmentation results. Other researchers have modified the FCM algorithm to obtain more robust hybrid segmentation algorithms by using other important approaches. For example, in [9], a method was proposed that combines the benefits stemming from the sound spatial coherency modeling capabilities of the hidden Markov random field (HMRF) model with the enhanced flexibility of the FCM algorithm. Siang and Ashidi [10] proposed an approach that analyzes the uniformity of the regions in color images by the histogram thresholding technique, after which the FCM is used to improve the compactness of the clusters forming uniform regions. A proposal based on the use of hyperplanes in the FCM algorithm and spatial constraints is presented in [11]; in this algorithm, a spatial regularizer is added to the fuzzy hyperplane-based objective function, taking into account the inherently spatial nature of the data. In [12], the authors presented an approach that relies on cluster center initialization and color quantization, allowing faster and more accurate convergence, so that it is suitable for segmenting very large color images. An analysis of different initialization methods for the k-means algorithm for segmenting images is presented in [13]. Dae-Won et al. proposed in [14] an algorithm that extracts the most vivid and distinguishable colors, referred to as the dominant colors; the color points closest to these dominant colors are selected as the initial centroids in the FCM calculations, using a fuzzy membership model between a color point and a reference color. Finally, an approach designed to provide an optimal initialization scheme for the FCM algorithm using single-point iterative weighting based upon prior information was presented in [15].

A method for solving the second shortcoming of the FCM algorithm is based on the exploitation of robust estimators, especially the M-estimators [16]. This approach was studied by Frigui and Krishnapuram [17], who designed their own robust estimator based on special loss and weight functions. In their algorithm, the robust estimators control the influence of outliers when calculating the centers of all the groups. This paper focuses on the second shortcoming of the FCM algorithm. For this, we propose to use classical M-estimators and, as the robust framework states [18], to use their loss and influence functions instead. We extend the proposed algorithm to a bidimensional clustering approach using the chromatic subspace of the IJK color space to segment color images [19].

2 Background

2.1 Fuzzy c-means clustering

Fuzzy cluster analysis allows gradual membership of data points to clusters, measured as degrees in [0, 1], providing the flexibility needed to express that data points can belong to more than one cluster. Additionally, the membership degrees offer a finer level of detail of the data model and can express how strongly a data point belongs to a cluster [16, 20].

Let $X = \{x_1, \dots, x_n\}$ be the set of given feature data and let $c$ be the number of clusters ($1 < c < n$) represented by the fuzzy sets $C_j$ ($j = 1, \dots, c$). Then, $U_f = (u_{ij})$ is called a fuzzy cluster partition of $X$ if $\sum_{i=1}^{n} u_{ij} > 0$, $\forall j \in \{1, \dots, c\}$, and $\sum_{j=1}^{c} u_{ij} = 1$, $\forall i \in \{1, \dots, n\}$, hold. A fuzzy cluster model of a given data set $X$ into $c$ clusters is defined to be optimal when it minimizes the following objective function under the above constraints,

$$J_f(X; U_f, C) = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \left\| x_i - c_j \right\|^2 ,$$
(1)

where $\|x_i - c_j\|^2$ is the square of the Euclidean distance from feature vector $x_i$ to the center $c_j$ of the class, and the parameter m ≥ 1 is a weighting exponent called the fuzzifier. The value of m determines the ‘fuzziness’ of the classification: when m = 1, the data are assigned crisply to the classes (k-means algorithm); when m = 2, the data are assigned gradually to the different groups with membership degrees in the interval [0, 1] (FCM algorithm). The objective function $J_f$ is alternately optimized with respect to the membership degrees $u_{ij}$ and the cluster centers $c_j$ by setting the derivative of $J_f$ with respect to each parameter equal to zero, taking into account the constraint stated above. Finally, the equations for the two iterative steps that form the FCM algorithm are given as follows:

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\|x_i - c_j\|^2}{\|x_i - c_k\|^2} \right)^{\frac{1}{m-1}}} = \frac{\|x_i - c_j\|^{-\frac{2}{m-1}}}{\sum_{k=1}^{c} \|x_i - c_k\|^{-\frac{2}{m-1}}}$$
(2)
$$c_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} x_i}{\sum_{i=1}^{n} u_{ij}^{m}}$$
(3)
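
For illustration, the following NumPy sketch implements the alternating updates (2) and (3); the function name, the random initialization, and the convergence test on the centers are our own choices and not part of the formulation above.

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-6, max_iter=100, seed=None):
    """Plain fuzzy c-means: alternate the membership update (2) and the center update (3)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=c, replace=False)]        # random initialization
    for _ in range(max_iter):
        # squared Euclidean distances ||x_i - c_j||^2, shape (n, c)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)                               # guard against division by zero
        U = d2 ** (-1.0 / (m - 1.0))                          # Eq. (2), unnormalized
        U /= U.sum(axis=1, keepdims=True)                     # each row now sums to 1
        Um = U ** m
        new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # Eq. (3)
        if np.abs(new_centers - centers).max() < eps:         # termination criterion
            centers = new_centers
            break
        centers = new_centers
    return U, centers
```

Note that for m = 2 the exponent −1/(m − 1) reduces to −1, so the memberships are simply proportional to the inverse squared distances.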

2.2 Robust M-estimator

M-estimators are a generalization of maximum likelihood estimation (MLE) and were proposed by Peter Huber [18, 21–23]. They are defined by a robust loss function ρ(x) = −ln(f(x)), connected with the probability density function f(x) of the sample data $x_i$, i = 1, …, n. The objective of M-estimators is to find an estimate $\hat{\theta}$ of θ such that

$$\hat{\theta} = \arg\min_{\theta \in \Theta} \sum_{i=1}^{n} \rho(x_i - \theta)$$
(4)

The estimate of the location parameter θ can be found by calculating the partial derivative of ρ with respect to θ, introducing the influence function ψ(x, θ) = ∂ρ(x, θ)/∂θ,

$$\sum_{i=1}^{n} \psi(x_i - \theta) = 0$$
(5)

Definition (5) is not fully equivalent to (4); however, it is used to find a solution for the minimization (4). Both definitions of the M-estimator are implicit, so iterative techniques are necessary to compute the estimate. Here, we derive one of them by assuming $\lim_{x \to 0} \frac{\psi(x)}{x} = c$. We can rewrite (5) in the form

$$\sum_{i=1}^{n} \psi(x_i - \theta) = \sum_{x_i \neq \theta} \frac{\psi(x_i - \theta)}{x_i - \theta} \, (x_i - \theta)$$
(6)

from which we can easily find an implicit formula for the estimate,

$$\theta^{(k+1)} = \frac{\sum_{i=1}^{n} \omega_{i,\theta^{(k)}} \, x_i}{\sum_{i=1}^{n} \omega_{i,\theta^{(k)}}}, \qquad \omega_{i,\theta} = \begin{cases} \dfrac{\psi(x_i - \theta)}{x_i - \theta}, & x_i \neq \theta \\ c, & \text{otherwise} \end{cases}$$
(7)

where $\omega_{i,\theta}$ is commonly called the weight function. The iterative solution of the M-estimator is usually called the W-estimator.
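
As an illustration of the iteration (7), the sketch below computes a W-estimate of location using Huber's ψ; the choice of Huber's function and of the tuning constant k is ours for illustration only and is not one of the estimators used later in this paper.

```python
import numpy as np

def huber_psi(r, k=1.345):
    """Huber influence function: psi(r) = r for |r| <= k, k*sign(r) otherwise."""
    return np.clip(r, -k, k)

def w_estimate(x, eps=1e-8, max_iter=100):
    """Iteratively reweighted location estimate following Eq. (7)."""
    x = np.asarray(x, dtype=float)
    theta = np.median(x)                      # robust starting point
    for _ in range(max_iter):
        r = x - theta
        w = np.ones_like(r)                   # limit value c = psi'(0) = 1 when r -> 0
        nz = np.abs(r) > eps
        w[nz] = huber_psi(r[nz]) / r[nz]      # weights psi(r)/r
        new_theta = np.sum(w * x) / np.sum(w)
        if abs(new_theta - theta) < eps:
            return new_theta
        theta = new_theta
    return theta
```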

3 Proposed approach

3.1 Robust c-prototypes

As seen in Section 2.1, the FCM algorithm is based on a least squares objective function. It is well known that the least squares approach is highly sensitive to aberrant points, which implies that FCM gives unsatisfactory results when applied to data sets contaminated with noise and outliers. To solve this problem, Frigui and Krishnapuram [16, 17] proposed an algorithm called robust c-prototypes with the following objective function,

$$J_r(X; U_f, C) = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \rho_j\!\left( \left\| x_i - c_j \right\|^2 \right).$$
(8)

There is one robust loss function $\rho_j$ associated with each cluster. As in the conventional algorithm, the optimization of this objective function under the same conditions yields the following updated expressions for the membership matrix and the vector of group centers:

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\rho_j\!\left( \|x_i - c_j\|^2 \right)}{\rho_j\!\left( \|x_i - c_k\|^2 \right)} \right)^{\frac{1}{m-1}}}$$
(9)
$$\sum_{i=1}^{n} u_{ij}^{m} \, w_{ij} \, (x_i - c_j) = 0, \qquad \text{where } w_{ij} = \frac{\partial \rho_j\!\left( \|x_i - c_j\|^2 \right)}{\partial \left( \|x_i - c_j\|^2 \right)}$$
(10)

It is clear that this algorithm can be seen as a generalization of the M-estimator for estimating the prototypes of the FCM algorithm. The expression for updating the prototype vector (the centers of the groups) is obtained by solving Equation 10 iteratively, which is equivalent to solving the W-estimator (7). For convenience, our proposal makes two significant changes to this algorithm. First, we use the same loss function for all groups. Second, the minimization of the objective function with respect to the prototype vector is prevented from entering a new iterative cycle to solve expression (7) (recall that the FCM algorithm is already an iterative method); so, instead of using the W-estimator, we use the influence function ψ. As was established in Section 2.2, the second definition of the M-estimator is based on the existence of the influence function ψ(x, θ) = ∂ρ(x, θ)/∂θ. Thus, it is possible to rewrite expression (10) in terms of the influence function rather than the W-estimator weights. Based on these two arguments, the objective function can be written as follows:

$$J_{rf}(X; U_f, C) = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, \rho\!\left( \left\| x_i - c_j \right\|^2 \right)$$
(11)

In this algorithm and its extension to the color space, we use a gradual assignment of the data to the groups, as in the conventional FCM algorithm, with a value of m = 2. To optimize $J_{rf}$ with respect to the membership matrix, we use the Lagrange multiplier technique to obtain the following updated equation:

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\rho\!\left( \|x_i - c_j\|^2 \right)}{\rho\!\left( \|x_i - c_k\|^2 \right)} \right)^{\frac{1}{m-1}}}$$
(12)

To minimize $J_{rf}$ with respect to the prototype vector, we fix the membership matrix and set the gradient to zero:

$$c_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} \, \psi\!\left( \|x_i - c_j\|^2 \right) x_i}{\sum_{i=1}^{n} u_{ij}^{m} \, \psi\!\left( \|x_i - c_j\|^2 \right)}$$
(13)
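
A minimal sketch of the updates (12) and (13) follows. The Welsch (exponential) loss applied to the squared distance is only an illustrative stand-in for the M-estimators listed later in Table 2, and the scale parameter sigma2 is an assumed constant.

```python
import numpy as np

def welsch_rho(t, sigma2=1.0):
    """Illustrative loss applied to squared distances t = ||x - c||^2."""
    return 1.0 - np.exp(-t / sigma2)

def welsch_psi(t, sigma2=1.0):
    """Its influence function, psi(t) = d rho(t) / d t."""
    return np.exp(-t / sigma2) / sigma2

def robust_c_prototypes(X, c, m=2.0, eps=1e-6, max_iter=100, sigma2=1.0, seed=None):
    """One possible realization of the updates (12) and (13)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(X.shape[0], size=c, replace=False)]
    for _ in range(max_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        rho = np.fmax(welsch_rho(d2, sigma2), 1e-12)
        U = rho ** (-1.0 / (m - 1.0))                        # Eq. (12), unnormalized
        U /= U.sum(axis=1, keepdims=True)
        W = (U ** m) * welsch_psi(d2, sigma2)                # u_ij^m * psi(||x_i - c_j||^2)
        new_centers = (W.T @ X) / np.fmax(W.sum(axis=0)[:, None], 1e-12)   # Eq. (13)
        if np.abs(new_centers - centers).max() < eps:
            centers = new_centers
            break
        centers = new_centers
    return U, centers
```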

3.2 Color robust c-prototypes

The algorithm stated above lets us segment grayscale images, but the idea can be extended to any color space. We extend this proposal to segment color images using different color spaces, but the extension is not immediate. For instance, the RGB color space requires computing $U_f$ and $C_j$ $2^8 \times 2^8 \times 2^8$ times, which is equivalent to running the classical FCM algorithm on an image of 4,096 × 4,096 pixels. Since this is computationally intractable in practice, it is necessary to quantize the color space into $q_h$ bins, where h is the channel index [12]. Most color spaces use three channels (e.g., RGB, HSV, CIELab, and IJK) to describe each pixel in an image, so we have to define $q_1$, $q_2$, and $q_3$; we consider that each color component is divided into the same number of bins, $q_1 = q_2 = q_3 = q$.
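
A minimal sketch of the per-channel quantization just described, under the assumption that the image is given as a float array scaled to [0, 1]; the helper name and the default q = 64 are illustrative.

```python
import numpy as np

def quantize_channels(img, q=64):
    """Map each channel of a float color image with values in [0, 1] to q bins.

    Clustering then runs over at most q**3 distinct colors (q**2 in a 2D
    chromatic subspace) instead of over every pixel.
    """
    return np.clip((img * q).astype(int), 0, q - 1)

# Usage sketch: list the occupied bins and how many pixels fall into each.
# idx = quantize_channels(img, q=64)                    # (H, W, 3) bin indices
# bins, counts = np.unique(idx.reshape(-1, 3), axis=0, return_counts=True)
```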

In this paper, we propose to apply the color robust c-prototypes (ColorRcP) clustering algorithm in the JK chromatic subspace of the IJK color space. This color space provides a convenient representation that yields distances and independence between the chromatic and achromatic components [24].

The best-known system for color representation is RGB (red, green, blue), whose components take values in the interval [0, 255]. A color specification system based on luminosity, saturation, and hue forms a perceptual representation system that is obtained by applying some transforms to the RGB system [19].

Every color q in the RGB system is described by a vector with three scalar components, $q = (R, G, B)$. Let $z_1$, $z_2$, and $z_3$ be three real and positive numbers satisfying $z_1 + z_2 + z_3 = 1$. For $q_1 = (R_1, G_1, B_1)$ and $q_2 = (R_2, G_2, B_2)$, the following scalar product is defined in the RGB space:

$$\langle q_1, q_2 \rangle = z_1 R_1 R_2 + z_2 G_1 G_2 + z_3 B_1 B_2, \qquad \text{where } \|q\|^2 = z_1 R^2 + z_2 G^2 + z_3 B^2$$
(14)

Let the following basis be given in the RGB space: $f_1 = (1, 1, 1)$, $f_2 = (1, 0, 0)$, and $f_3 = (0, 1, -1)$. The Gram-Schmidt procedure yields the orthonormal basis $e_1 = (1, 1, 1)$, $e_2 = \left(\sqrt{\tfrac{1-z_1}{z_1}},\, -\sqrt{\tfrac{z_1}{1-z_1}},\, -\sqrt{\tfrac{z_1}{1-z_1}}\right)$, and $e_3 = \left(0,\, \sqrt{\tfrac{z_3}{z_2(1-z_1)}},\, -\sqrt{\tfrac{z_2}{z_3(1-z_1)}}\right)$. The coordinates I, J, K of the IJK orthonormal coordinate system in the basis $e_1, e_2, e_3$ are computed as $I = \langle q, e_1 \rangle = z_1 R + z_2 G + z_3 B$, $J = \langle q, e_2 \rangle = \sqrt{z_1(1-z_1)}\left(R - \frac{z_2 G + z_3 B}{z_2 + z_3}\right)$, and $K = \langle q, e_3 \rangle = \sqrt{\frac{z_2 z_3}{1-z_1}}\,(G - B)$, where $z_1$, $z_2$, and $z_3$ can be taken as constants equal to $\tfrac{1}{3}$ [23]. As a result, the components of the IJK space can be computed as

$$I = \frac{R + G + B}{3}, \qquad J = \frac{2R - G - B}{3\sqrt{2}}, \qquad K = \frac{G - B}{\sqrt{6}}.$$

In order to avoid the influence of non-uniform illumination, we apply the clustering procedure only to the JK chromatic subspace. Using the color quantization q stated above, we define $q_1$ and $q_2$ corresponding to the J and K color components. Then, the objective function to minimize is

$$J_r^{JK}(X; U_f, C) = \sum_{q_1=1}^{n} \sum_{q_2=1}^{n} \sum_{j=1}^{c} u_{q_1 j, q_2 j}^{m} \, \rho\!\left( \left\| X_{q_1, q_2} - c_j \right\|^2 \right)$$
(15)

subject to $\sum_{q_1=1}^{n} \sum_{q_2=1}^{n} u_{q_1 j, q_2 j} > 0$, $\forall j \in \{1, \dots, c\}$, and $\sum_{j=1}^{c} u_{q_1 j, q_2 j} = 1$, $\forall (q_1, q_2) \in \{1, \dots, n\}^2$.

Considering that the color components J and K are divided into the same number of bins ($q_1 = q_2 = q$), we can rewrite the objective function (15) as

$$J_r^{JK}(X; U_f, C) = \sum_{q=1}^{n^2} \sum_{j=1}^{c} u_{qj}^{m} \, \rho\!\left( \left\| X_q - c_j \right\|^2 \right).$$
(16)

Since the gradient of $J_r^{JK}$ with respect to $u_{qj}$ and $c_j$ vanishes at a local optimum, and taking into account the same conditions used to minimize the objective function (16), it is easy to show that the optimal updating equations are given by

$$u_{qj} = \frac{1}{\sum_{l=1}^{c} \left( \dfrac{\rho\!\left( \|x_q - c_j\|^2 \right)}{\rho\!\left( \|x_q - c_l\|^2 \right)} \right)^{\frac{1}{m-1}}}$$
(17)
$$c_j = \frac{\sum_{q=1}^{n^2} u_{qj}^{m} \, \psi\!\left( \|x_q - c_j\|^2 \right) x_q}{\sum_{q=1}^{n^2} u_{qj}^{m} \, \psi\!\left( \|x_q - c_j\|^2 \right)}$$
(18)
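
The following sketch ties the pieces together: the JK plane is quantized into q × q bins, the occupied bin centers become the samples $x_q$, and the updates (17) and (18) are iterated over them. The Welsch loss is again only an illustrative stand-in for the M-estimators of Table 2, and the bin-center construction and the final pixel labeling are our own implementation details.

```python
import numpy as np

def color_rcp_jk(jk, c, q=64, m=2.0, eps=1e-6, max_iter=100, sigma2=0.01, seed=None):
    """Cluster the JK chromatic plane with the updates (17)-(18).

    jk : (H, W, 2) array holding the J and K components of the image.
    Returns the prototypes in the JK plane and a per-pixel label map.
    """
    rng = np.random.default_rng(seed)
    # quantize J and K into q bins each; the occupied bin centers are the samples x_q
    lo, hi = jk.min(axis=(0, 1)), jk.max(axis=(0, 1))
    idx = np.clip(((jk - lo) / (hi - lo + 1e-12) * q).astype(int), 0, q - 1)
    bins = np.unique(idx.reshape(-1, 2), axis=0)
    Xq = lo + (bins + 0.5) * (hi - lo) / q
    centers = Xq[rng.choice(Xq.shape[0], size=c, replace=False)]
    for _ in range(max_iter):
        d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        rho = np.fmax(1.0 - np.exp(-d2 / sigma2), 1e-12)     # illustrative Welsch loss
        U = rho ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                    # Eq. (17)
        W = (U ** m) * np.exp(-d2 / sigma2) / sigma2         # u_qj^m * psi(||x_q - c_j||^2)
        new_centers = (W.T @ Xq) / np.fmax(W.sum(axis=0)[:, None], 1e-12)   # Eq. (18)
        if np.abs(new_centers - centers).max() < eps:
            centers = new_centers
            break
        centers = new_centers
    # assign every pixel to the closest prototype in the JK plane
    labels = ((jk[..., None, :] - centers[None, None, :, :]) ** 2).sum(axis=-1).argmin(axis=-1)
    return centers, labels
```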

4 Results and comparisons

The proposed ColorRcP clustering algorithm is evaluated here, and its performance is compared with the following FCM-based segmentation methods: the hidden Markov random field FCM (HMRFFCM) [9], the histogram thresholding FCM (HTFCM) [10], the fuzzy hyper-prototype clustering (FHCS) [11], the quantized FCM (QFCM_S2) [12], the usual-initialization FCM (UFCM) [13], the color-clustering FCM (CFCM) [14], and the single point iterative weighted FCM (SWFCM) [15]. We also compare our proposal with image segmentation techniques beyond FCM, such as the penalized inverse expectation maximization (PIEM), which extracts features for each pixel using a Gabor filter and classifies the pixels into regions with the expectation maximization (EM) algorithm [25], and the segmentation by clustering then labeling (SCLpost), which uses small homogeneous regions to build a 3D feature vector in the color space and then performs the clustering with a hybrid approach that combines mean-shift with a semi-supervised discriminant analysis algorithm [26].

When the set of real images is segmented, the criteria used to compare the segmentation performance of the various algorithms are the Probabilistic Rand Index (PRI), which evaluates the result of the tested algorithm against a set of manually segmented images [27]; the Variation of Information (VOI), which quantifies the information lost and gained between two clusterings belonging to the lattice of possible partitions [28]; the Global Consistency Error (GCE), which quantifies the extent to which one segmentation can be viewed as a refinement of the other [29]; and the Boundary Displacement Error (BDE), which evaluates the average displacement error of boundary pixels between two segmented images by computing the distance between each boundary pixel and the closest boundary pixel in the other segmentation [30]. The Kappa coefficient $\hat{K}$ is used to summarize the segmentation performance when a remote sensing image is segmented [31]. The Kappa statistic combines a measure of the overall accuracy of the image classification with the individual category accuracies as a means of assessing the actual agreement between classification and observation [32]. Many research works in image segmentation use these indexes to compare the performance of different methods; for this reason, we use these measures to evaluate the proposed algorithm [27–30].

The PRI index is defined as follows [27]:

$$\mathrm{PRI}\!\left(S, G_k\right) = \frac{2}{N(N-1)} \sum_{i,j,\, i<j} \left[ p_{ij}^{\,c_{ij}} \left(1 - p_{ij}\right)^{1 - c_{ij}} \right],$$
(19)

where N is the number of pixels, S is the segmentation provided by the tested algorithm, $c_{ij}$ is a Boolean value indicating whether the pixel pair $(x_i, x_j)$ shares the same label, $p_{ij}$ is the expected value of the Bernoulli distribution for the pixel pair, $l_i^{G_k}$ is the label of pixel $x_i$ in the $k$th manually segmented image, and $l_i^{S}$ is the label of pixel $x_i$ in the tested segmentation. The ground-truth set is defined as $\{G_1, G_2, \dots, G_L\}$, where L is the number of manually segmented images. The PRI index takes values in the range [0, 1], where high values indicate a large similarity between the segmented image and the ground-truth; the VOI index [28] is

$$\mathrm{VOI}\!\left(S, G_k\right) = H(S) + H\!\left(G_k\right) - 2\, I\!\left(S, G_k\right),$$
(20)

where $H = -\sum_{i=1}^{c} \frac{n_i}{n} \log \frac{n_i}{n}$ is the entropy, with $n_i$ being the number of points belonging to the $i$th cluster, $I(S, G_k) = \sum_{i=1}^{c} \sum_{j=1}^{c} \frac{n_{i,j}}{n} \log \frac{n_{i,j}/n}{(n_i/n)(n_j/n)}$ is the mutual information between the two clusterings, and $n_{i,j}$ is the number of points in the intersection of cluster $i$ of $S$ and cluster $j$ of $G_k$. The VOI is a distance: the smaller its value, the closer the obtained segmentation and the ground-truth are; the GCE index [29] is

$$\mathrm{GCE}\!\left(S, G_k\right) = \frac{1}{n} \min\!\left\{ \sum_{i=1}^{n} E\!\left(S, G_k, x_i\right),\; \sum_{i=1}^{n} E\!\left(G_k, S, x_i\right) \right\},$$
(21)

where $E(S, G_k, x_i) = \frac{\left| R(S, x_i) \setminus R(G_k, x_i) \right|}{\left| R(S, x_i) \right|}$ is a measure of error at each pixel $x_i$, $|\cdot|$ denotes cardinality, $\setminus$ is the set difference, and $R(S, x_i)$ is the set of pixels corresponding to the region in segmentation S that contains pixel $x_i$. The closer the GCE is to zero, the better segmentation S is with respect to the ground-truth; the BDE index [30] is

$$\mathrm{BDE}\!\left(S, G_k\right) = \frac{1}{2}\left( D_{S}^{G_k} + D_{G_k}^{S} \right),$$
(22)

where $D_{S}^{G_k}$ is a distance distribution signature obtained by summing, over all boundary points of S, the distance to the closest boundary point in $G_k$; and the Kappa coefficient is computed from [31, 32]

$$\hat{K} = \frac{N_c \sum_{i=1}^{\nu} x_{ii} - \sum_{i=1}^{\nu} x_{i+}\, x_{+i}}{N_c^2 - \sum_{i=1}^{\nu} x_{i+}\, x_{+i}},$$
(23)

where $x_{ii}$ are the diagonal entries of the confusion matrix, $x_{i+}$ and $x_{+i}$ indicate the sum of row i and the sum of column i of the confusion matrix, respectively, $N_c$ is the number of elements in the confusion matrix, and $\nu = 9$ is the length of the sample data in the current 3 × 3 window. A value of zero indicates no agreement, while a value of 1.0 indicates perfect agreement between the classifier output and the reference data.
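
As a worked check of (23), the snippet below computes the Kappa coefficient from a confusion matrix, interpreting $N_c$ as the total number of samples tallied in the matrix (the usual convention):

```python
import numpy as np

def kappa(conf):
    """Kappa coefficient from a confusion matrix, following Eq. (23).

    conf[i, j] counts the samples of reference class j assigned to class i;
    N_c is taken as the total number of samples tallied in the matrix.
    """
    conf = np.asarray(conf, dtype=float)
    N = conf.sum()
    diag = np.trace(conf)
    chance = (conf.sum(axis=1) * conf.sum(axis=0)).sum()   # sum_i x_{i+} * x_{+i}
    return (N * diag - chance) / (N ** 2 - chance)

# Perfect agreement on the 9 samples of a 3 x 3 window gives kappa == 1.0:
# print(kappa(np.diag([3, 3, 3])))
```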

Figure 1 depicts a subset of real images from the Berkeley Segmentation Data Set 500 (BSD500) used in the tests. This subset includes a set of real-world color images along with segmentation maps provided by different people. In these images, a number of aspects that make the segmentation process difficult are present, such as natural noise, artifacts, and varying imaging conditions.

Figure 1

Subset of real images from the Berkeley Segmentation Data Set 500 (BSD500). (a) 35070, (b) 42049, (c) 67079, (d) 80099, (e) 113044, (f) 118035, (g) 124084, (h) 135069, (i) 196073, (j) 198023, (k) 208001, and (l) 210088.

Table 1 presents the average performance criteria (PRI, VOI, GCE, and BDE) obtained by the proposed ColorRcP algorithm, as well as by the comparative algorithms, after segmenting all images. We note that the comparative HMRFFCM, HTFCM, and FHCS algorithms are based on the RGB color space [9–11], and the PIEM, QFCM_S2, and SCLpost algorithms are based on the CIELab color space [12, 25, 26]. As can be seen in Table 1, the proposed algorithm obtains better results than the comparative algorithms in their respective color spaces.

Table 1 Average performance calculated for all segmented images

The parameters of the proposed ColorRcP algorithm are set as follows: the prototypes are initialized randomly, the fuzzifier is m = 2, the tolerance parameter is ϵ = 1E-6 (the minimum change of the objective function, used as the termination criterion that controls the iterations of the algorithm as well as the quality of the clustering procedure), and the number of clusters c is chosen depending on the image to be segmented. The M-estimators used to control the influence of the atypical information are presented in Table 2 [22, 33, 34].

Table 2 M-estimators used in the proposed method

Some visual results are depicted in Figures 2, 3, 4, and 5. As can be seen in the images, the ColorRcP method segments the images better and distorts the edges less than the other methods.

Figure 2

Segmentations of image 35070 (c = four regions). (a) HMRFFCM, (b) HTFCM, (c) FHCS, (d) SCLpost (λ = 200), (e) PIEM, (f) QFCM_S2, (g) ColorRcP Simple-Cut, (h) ColorRcP Hampel, (i) ColorRcP Geman-McClure, (j) ColorRcP Asad, and (k) ColorRcP Insha.

Figure 3

Segmentations of the image 113044 (c = two regions). (a) HMRFFCM, (b) HTFCM, (c) FHCS, (d) SCLpost (λ = 200), (e) PIEM, (f) QFCM_S2, (g) ColorRcP Simple-Cut, (h) ColorRcP Hampel, (i) ColorRcP Geman-McClure, (j) ColorRcP Asad, and (k) ColorRcP Insha.

Figure 4

Segmentations of the image 196073 (c = three regions). (a) HMRFFCM, (b) HTFCM, (c) FHCS, (d) SCLpost (λ = 200), (e) PIEM, (f) QFCM_S2, (g) ColorRcP Simple-Cut, (h) ColorRcP Hampel, (i) ColorRcP Geman-McClure, (j) ColorRcP Asad, and (k) ColorRcP Insha.

Figure 5

Segmentations of the image 210088 (c = three regions). (a) HMRFFCM, (b) HTFCM, (c) FHCS, (d) SCLpost (λ = 200), (e) PIEM, (f) QFCM_S2, (g) ColorRcP Simple-Cut, (h) ColorRcP Hampel, (i) ColorRcP Geman-McClure, (j) ColorRcP Asad, and (k) ColorRcP Insha.

To demonstrate the performance of the proposed ColorRcP clustering scheme in real applications, a sub-region of the remote sensing image ‘Zhalong Nature Reserve’ (see Figure 6a) is segmented [15]. The proposed algorithm is compared with other approaches designed to segment this kind of image, namely the UFCM, CFCM, and SWFCM algorithms [13–15].

Figure 6

Segmentation results on a remote sensing image in c = eight regions. (a) Sub-region of Zhalong Nature Reserve on October 21, 2001 [33], (b) UFCM, (c) CFCM, (d) SWFCM, (e) ColorRcP Simple-Cut, (f) ColorRcP Hampel, (g) ColorRcP Geman-McClure, (h) ColorRcP Asad, and (i) ColorRcP Insha.

As in the first test, the parameters of the ColorRcP algorithm were set as follows: random initialization, fuzzifier m = 2, and a termination criterion of ϵ = 1E-6 based on the minimum change of the objective function computed at each iteration. The remote sensing image was segmented into c = eight regions using the different M-estimators presented in Table 2.

Table 3 shows the segmentation performance of the different algorithms in terms of the Kappa coefficient, where one can see that the proposed method achieves a higher value than the comparative algorithms. From Figure 6, the best visual results are provided by the proposed method with the different M-estimators, with better edge preservation and more homogeneous regions.

Table 3 Comparative results in terms of Kappa coefficient

5 Conclusions

This paper presents a method to segment color images using the IJK color space; it is based on the robust c-prototypes algorithm, adapted to use different M-estimators. As shown by the quantitative results, all of the presented variants performed better than other algorithms proposed in the recent literature. The visual results also show that the segmented images preserve details and contain homogeneous regions in comparison with the other algorithms.

References

  1. Plataniotis K, Lukac R: Color Image Processing: Methods and Applications. Boca Raton: CRC Press; 2007.

  2. Jain AK, Murty MN, Flynn PJ: Data clustering: a review. ACM Comput. Surv. 1999, 31(3):264-323. 10.1145/331499.331504

  3. Bezdek JC: Pattern Recognition with Fuzzy Objective Function Algorithms. New York: Plenum Press; 1981.

  4. Chuang KS, Tzeng HL, Chen S, Wu J, Chen TJ: Fuzzy c-means clustering with spatial information for image segmentation. Comput. Med. Imaging Graph. 2006, 30(1):9-15. 10.1016/j.compmedimag.2005.10.001

  5. Ahmed MN, Yamany SM, Farag AA: A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 2002, 21(3):193-199. 10.1109/42.996338

  6. Chen S, Zhang D, Cai W: Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recogn. 2007, 40: 825-838. 10.1016/j.patcog.2006.07.011

  7. Zhao F, Jiao L, Liu H, Gao X: A novel fuzzy clustering algorithm with non local adaptive spatial constraint for image segmentation. Signal Process. 2011, 91(4):988-999. 10.1016/j.sigpro.2010.10.001

  8. Liu J, Pham T: Fuzzy hyper-prototype clustering. In KES 2010, Cardiff, 8-10 September 2010. Lecture Notes in Computer Science, 6276. Berlin Heidelberg: Springer; 2010:379-389.

  9. Chatzis SP, Varvarigou TA: A fuzzy clustering approach toward hidden Markov random field models for enhanced spatially constrained image segmentation. IEEE Trans. Fuzzy Syst. 2008, 16(5):1351-1361.

  10. Siang K, Ashidi N Mat Isa: Color image segmentation using histogram thresholding–fuzzy c-means hybrid approach. Pattern Recogn. 2011, 44(1):1-15. 10.1016/j.patcog.2010.07.013

  11. Liu J, Pham TD: A spatially constrained fuzzy hyper-prototype clustering algorithm. Pattern Recogn. 2012, 45(4):1759-1771. 10.1016/j.patcog.2011.11.001

  12. Le Capitaine H, Frélicot C: A fast fuzzy c-means algorithm for color image segmentation. In EUSFLAT-LFA 2011, Aix-les-Bains, France, 18-22 July 2011. Paris: Atlantis; 2011:1074-1081.

  13. Pena JM, Lozano JA, Larranaga P: An empirical comparison of four initialization methods for the K-means algorithm. Pattern Recogn. Lett. 1999, 20(10):1027-1040. 10.1016/S0167-8655(99)00069-0

  14. Dae-Won K, Wang HL, Doheon L: A novel initialization scheme for the fuzzy c-means algorithm for color clustering. Pattern Recogn. Lett. 2004, 25(2):227-237. 10.1016/j.patrec.2003.10.004

  15. Fan J, Han M, Wang J: Single point iterative weighted fuzzy C-means clustering algorithm for remote sensing image segmentation. Pattern Recogn. 2009, 42(11):2527-2540. 10.1016/j.patcog.2009.04.013

  16. de Oliveira JVV, Pedrycz W: Advances in Fuzzy Clustering and its Applications. New York: Wiley; 2007.

  17. Frigui H, Krishnapuram R: A robust algorithm for automatic extraction of an unknown number of clusters from noisy data. Pattern Recogn. Lett. 1996, 17(12):1223-1232. 10.1016/0167-8655(96)00080-3

  18. Venetsanopoulos AN, Pitas I: Nonlinear Digital Filters. Boston: Kluwer Academic Publishers; 1990.

  19. Pătraşcu V: Fuzzy image segmentation based on triangular function and its n-dimensional extension. Stud. Fuzziness Soft Comput. 2007, 210: 187-207. 10.1007/978-3-540-38233-1_7

  20. Castillejos H, Ponomaryov V, Nino-de-Rivera L, Golikov V: Wavelet transform fuzzy algorithms for dermoscopic image segmentation. Comput. Math. Methods Med. 2012. DOI:10.1155/2012/578721

  21. Gallegos-Funes FJ, Ponomaryov V: Real-time image filtering scheme based on robust estimators in presence of impulsive noise. R. Time Imag. 2004, 8(2):78-90.

  22. Astola J, Kousmanen P: Fundamentals of Nonlinear Digital Filtering. Boca Raton-New York: CRC Press; 1997.

  23. Ronchetti EM, Hampel FR, Rousseeuw PJ, Stahel WA: Robust Statistics: The Approach Based on Influence Functions. New York: Wiley; 1986.

  24. Angulo J, Serra J: Color segmentation by ordered mergings. In IEEE Int. Conf. on Image Processing, ICIP 2003, Barcelona, 14-17 September 2003. Piscataway: IEEE; 2003:125-128.

  25. Khan J, Adhami R, Bhuiyan S, Sobieranski A: A customized Gabor filter for unsupervised color image segmentation. Image Vis. Comput. 2009, 27(4):489-501. 10.1016/j.imavis.2008.07.001

  26. Huang R, Sang N, Lou D, Tang Q: Image segmentation via coherent clustering in L*a*b* color space. Pattern Recogn. Lett. 2011, 32(7):891-902. 10.1016/j.patrec.2011.01.013

  27. Unnikrishnan R, Pantofaru C, Hebert M: Toward objective evaluation of image segmentation algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29(6):929-944.

  28. Meilă M: Comparing clusterings: an axiomatic view. In Proceedings of the 22nd Int. Conf. on Machine Learning, ICML05, Bonn, 7-11 August 2005. New York: ACM; 2005:577-584.

  29. Martin D, Fowlkes C, Tal D, Malik J: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the 8th Int. Conf. Computer Vision, ICCV 2001, Vancouver, 7-14 July 2001, vol. 2. Piscataway: IEEE; 2001:416-423.

  30. Freixenet J, Munoz X, Raba D, Marti J, Cuff X: Yet another survey on image segmentation: region and boundary information integration. In ECCV 2002, Copenhagen, 27 May - 2 June 2002. Lecture Notes in Computer Science, 2352. Berlin Heidelberg: Springer; 2002:408-422.

  31. Mather P, Koch M: Computer Processing of Remotely-Sensed Images: An Introduction. Hoboken: Wiley-Blackwell; 2011.

  32. Okeke F, Karnieli A: Methods for fuzzy classification and accuracy assessment of historical aerial photographs for vegetation change analyses. Part I: algorithm development. Int. J. Remote Sens. 2006, 27(1-2):153-176.

  33. Ali A, Qadir M: A modified M-estimator for detection of outliers. Pak. J. Stat. Oper. Res. 2005, 1: 49-64.

  34. Ali A, Ullah I: Insha's redescending M-estimator for robust regression. Pak. J. Stat. Oper. Res. 2006, 2: 135-144.


Acknowledgements

The authors thank the Instituto Politécnico Nacional de México (National Polytechnic Institute of Mexico) and CONACYT for their financial support.

Author information

Corresponding author

Correspondence to Francisco J Gallegos-Funes.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Mújica-Vargas, D., Gallegos-Funes, F.J., Rosales-Silva, A.J. et al. Robust c-prototypes algorithms for color image segmentation. J Image Video Proc 2013, 63 (2013). https://doi.org/10.1186/1687-5281-2013-63
