Modeling sets of unordered points using highly eccentric ellipses
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 11 (2014)
Abstract
An algorithm for modeling a set of unordered two-dimensional points by line segments is presented. The points are modeled by highly eccentric ellipses, and line segments are extracted by the major axes of these elongated ellipses. At first, a single ellipse is fitted to points which is then iteratively split to a large number of highly eccentric ellipses to cover the set of points. Then, a merge process follows in order to combine neighboring ellipses with almost collinear major axes to reduce the complexity of the model. Experimental results on various databases show that the proposed scheme is an efficient technique for modeling unordered sets of points and shapes by line segments. A computer vision application of the method is also presented regarding the detection of retinal fundus image features, such as end-points, junctions, and crossovers.
1 Introduction
In many computer vision applications, at a mid-level process, it is common to fit line segments in order to model a set of unordered points so as to summarize higher level features. For example, the detection of vanishing points [1], the vectorization of raster images [2], and the detection of road structures and parts [3] are among applications necessitating line segment description of image structures. In many of the aforementioned problems, the involved algorithms assume that they are provided with an ordered point set and standard polygonal approximation [4, 5] is then applied. However, determining the ordering of point sets is not a trivial task, and in the method described herein, we relax this assumption by making no prior hypothesis about the ordering of the points.
In the above context, the Hough transform (HT) is a widely used method for line fitting, and many variants have been proposed to improve its efficiency [6, 7]. One of these variants is the randomized Hough transform (RHT) [8, 9] which randomly selects a number of pixels from the input image and maps them into one point in the parameter space which was shown to be less complex, compared to the original algorithm, as far as time and storage issues are concerned. In [10], the probabilistic HT was proposed whose basic idea is to apply a random sampling of edge points to reduce computational complexity and execution time. Further improvements were introduced in [11]. A similar concept was proposed in [12], where an orientation-based strategy was adopted to filter out inappropriate edge pixels, before performing the standard HT line detection which improves the randomized detection process. Also, the idea of fuzziness is integrated in the main algorithm in [13] to model the uncertainty imposed to the contour due to noise. Thus, a point can contribute to more than one bin in the standard HT process. A general comparison between probabilistic and non-probabilistic HT variants can be found in [14].
The robust HT is introduced in [15] where both the length and the end-points of the lines may be computed. Moreover, the algorithm in [16] provides a method for adopting a shape-dependent voting scheme for the calculation of the histogram bins. Finally, a novel HT based on the eliminating particle swarm optimization (EPSO) is proposed in [17], to improve the execution time of the algorithm. The problem parameters are considered to be the particle positions and the EPSO algorithm searches the optimum solution by eliminating the ‘weakest’ particles, to speed up the process.
Line segment fitting may also be used in a shape description process. The commonly used algorithm of Moore [18] was a first solution to shape following. However, this algorithm is appropriate only for traversing curves without intersections and produces models with high complexity, although improvements of the main algorithm have also been considered up to date [19]. Another common model fitting method is the RANSAC algorithm [20], which despite the fact that it provides robust estimations, it is appropriate for fitting only one model at a time. Other approaches are the incremental line fitting [21] which is sensitive to noise and, most importantly, needs sequential ordering of the points and probabilistic methods [22] based on the Expectation-Maximization algorithm, generally necessitating the prior determination of the number of model components.
In this paper, we propose a method to model a set of unordered points by line segments. The result of the method is a sequence of straight line segments modeling the points, which correspond to the major axes of highly eccentric ellipses fitted on them. We call the procedure Direct Split-and-Merge algorithm or DSaM. The points may correspond to the locations of edge pixels. The basic idea relies on a split-and-merge process, where the algorithm is initialized by one ellipse, representing the mean and the covariance of the initial point set, which is then split, through an iterative scheme, until a number of small and elongated ellipses occurs. Then, a merge process takes place to combine the resulting ellipses in order to reduce the complexity of the representation. At the end, the algorithm provides a relatively small number of elongated ellipses fitted on the points. The proposed method is general and may also model a set of points containing inner structures, such as joints. Numerical experiments on common shape databases are presented to underpin the performance of the proposed method. Also, edgemaps of natural images were used in our experiments to verify the efficiency of the method on real data. The number of ellipses depends on the structure of points, and it is determined by two parameters controlling both the split and the merge processes. The extension of the method to a three-dimensional (3D) case is trivial, as the elongated ellipses may be transformed to 3D ellipsoids. In the 3D case, our rationale is to model the points by highly eccentric ellipsoids, thus, instead of line segments, multiple 2D planes fit to points. In that case, data from range images could be modeled. A preliminary version of this method was introduced in [23]. Note that our method is not affected by degradations in the contour, and thus, it is more reliable compared to the distance transform [24] or the medial axis transformation [25], as it assumes more relaxed initial conditions (a set of unordered points) compared to conventional shape description methods, where point traversal should be known a priori.
Moreover, we present some applications that could benefit by our DSaM method. To that end, an efficient algorithm for retinal image analysis based on the proposed modeling framework is presented. More precisely, the purpose is to parse the image representing the retinal vessels and extract their characteristic points (end-points, junctions, and crossovers).
The remainder of the paper is organized as follows: In Section 2, the split-and-merge algorithm is presented. Experimental results using various databases (MPEG7 [26], GatorBait100 [27], ETHZ [28], Brown [29]) that contain either object silhouettes or collections of real images are shown in Section 3. In the same section, a comparison of the proposed method with a variant of the Hough transform, namely, the progressive probabilistic Hough transform (PPHT) is demonstrated [30]. In Section 4, we present an applications of our method: an algorithm for retinal image analysis is presented along with its evaluation using the DRIVE database [31–33].
2 Direct split-and-merge method
Let X = {x_i | i = 1,…,N} be a set of points and E = {ε_j | j = 1,…,K} be the set of line segments modeling the points, where ε_j is the line describing the j-th segment.
We define the modeling error Δ induced by the representation by line segments as

Δ = \sum_{j=1}^{K} \sum_{i=1}^{N} δ_{ij} \, d(x_i, ε_j),

where K is the number of line segments the model uses to model the points, x_i, i = 1,…,N are the points, d(x_i, ε_j) is the perpendicular distance of point x_i to line ε_j, and δ_{ij} is an indicator function whose value is one if point x_i belongs to line segment ε_j and zero otherwise.
In order to prevent overfitting, models having a large number of line segments should be penalized. Therefore, an optimal model would have both low value of Δ and low complexity.
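To make the definition concrete, the following Python sketch computes Δ for a given assignment of points to line segments. The function and variable names are illustrative rather than taken from the authors' Matlab code, and each line ε_j is assumed to be represented by a point on the line and a unit direction vector.

```python
import numpy as np

def point_to_line_distance(x, p, u):
    """Perpendicular distance of 2D point x to the line through p with unit direction u."""
    v = x - p
    return np.abs(v[0] * u[1] - v[1] * u[0])  # magnitude of the 2D cross product

def modeling_error(points, labels, lines):
    """Modeling error Delta: sum of distances of each point to its assigned line.

    points : (N, 2) array of 2D points
    labels : (N,) array, labels[i] = j means x_i belongs to segment eps_j
    lines  : list of (p_j, u_j) pairs, a point on line eps_j and its unit direction
    """
    delta = 0.0
    for x, j in zip(points, labels):
        p, u = lines[j]
        delta += point_to_line_distance(np.asarray(x), np.asarray(p), np.asarray(u))
    return delta
```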
The computation of the ellipses, modeling the line segments, is performed in two steps: an iterative split process, where points are modeled by a number of line segments represented by the major axes of the corresponding ellipses and an iterative merge process, where small line segments are merged to reduce the model complexity. The split process tries to minimize the modeling error while the merge process decreases the model complexity, i.e., the number of line segments compared to the total number of points in the set.
In what follows, the two steps are presented in detail.
2.1 Split process
The ultimate goal of this step is to cover the point space with line segments representing the long axes of elongated ellipses and therefore, each point of the shape should be assigned to an eccentric ellipse. A split criterion is defined, based on Gestalt theory [34], which models the linearity and the connectivity the human brain uses when modeling contours.
In order to split a set X, it should be either non-linear or disconnected, or both. Linearity describes how close the points are to a straight line, while disconnectivity measures how concentrated these points are. In the ideal case, the covariance matrix of collinear 2D points has a very large eigenvalue and a zero eigenvalue. The eigenvector corresponding to the larger eigenvalue indicates the direction of the line segment. If the linearity property is relaxed, the less collinear the points are (i.e., the more they diverge from the linear assumption), the larger the minimum eigenvalue becomes. Based on this observation, in our method, linearity is described by the minimum eigenvalue of the covariance matrix of the points in X. Also, the disconnectivity W of two sets of points X and Y is the smallest distance between a point in X and a point in Y:

W(X, Y) = \min_{x \in X,\, y \in Y} \lVert x - y \rVert.
In the case of a single set, disconnectivity is the largest distance between two successive points in that set. It may be computed by projecting the points onto both axes defined by the eigenvectors of the covariance matrix of the set. Then, successive points are defined by scanning along the axes and their distances are computed. Let X_i be the projection of a set X onto the eigenvector e_i. The disconnectivity of X is defined as

W(X) = \max_{i = 1,…,d} \; \max_{j = 1,…,N-1} \left\lVert x^{(i)}_{(j+1)} - x^{(i)}_{(j)} \right\rVert,

where N is the number of points in X, d is the dimension of X (here d = 2), and x^{(i)}_{(j)} is the j-th point of the sorted set X_i. A large value of disconnectivity indicates a better separation of the point sets. The projections onto all of the eigenvectors should be examined, as we do not know a priori which direction to follow while splitting. Although intuitively one would suggest splitting along the direction of the principal axis, we observed that in many cases this is not the best choice. Also, note that since the ordering of the points is not known a priori, their projection onto the eigenvectors of their covariance matrix provides a natural way of ordering them.
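As an illustration of these two quantities, the sketch below computes the linearity measure (smallest eigenvalue of the covariance matrix) and the single-set disconnectivity (largest gap between successive projections onto the eigenvectors). It is a minimal Python rendering of the definitions above, with illustrative names.

```python
import numpy as np

def linearity(points):
    """Smallest eigenvalue of the covariance matrix of the 2D points."""
    cov = np.cov(points.T)
    return np.min(np.linalg.eigvalsh(cov))

def disconnectivity(points):
    """Largest gap between successive points, measured along each eigenvector
    of the covariance matrix; the maximum over the directions is returned."""
    cov = np.cov(points.T)
    _, eigvecs = np.linalg.eigh(cov)
    max_gap = 0.0
    for i in range(eigvecs.shape[1]):
        proj = np.sort(points @ eigvecs[:, i])   # ordered projections onto e_i
        if len(proj) > 1:
            max_gap = max(max_gap, float(np.max(np.diff(proj))))
    return max_gap
```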
The disconnectivity of a single set of points must also be estimated in the split step, since there may exist subsets that, although linear, are disconnected, as demonstrated in Figure 1.
The split of an ellipse should be performed along a direction defined by an eigenvector of its covariance matrix. In order to select the split direction, the axis corresponding to an eigenvector is considered as the discrimination border between the resulting line clusters, and points lying in the same subplane are grouped together. Then, the disconnectivity of each line cluster is computed. Finally, the direction with the largest disconnectivity is selected for splitting (Figure 2).
Eventually, the adopted strategy, which minimizes Δ and prefers elongated ellipses, can be expressed as follows: split every ellipse whose minimum eigenvalue is greater than a threshold T1 (linearity) or whose maximum gap within the current segment is greater than a threshold T2 (disconnectivity). The process is initialized with one ellipse, corresponding to the covariance of the initial point set centered at the mean value of the point locations. Thresholds T1 and T2 may be computed with a heuristic algorithm, as explained in Section 3.1.
At iteration t + 1, a given ellipse, characterized by the eigenvalues λ_1^t and λ_2^t of its covariance matrix Σ^t (with λ_1^t ≥ λ_2^t) and by its center μ^t, is split into two new ellipses with centers at the two antipodal points on the major axis:

μ^{t+1}_{1,2} = μ^t ± \sqrt{λ^t}\, e^t,

where e^t and λ^t are the eigenvector and the eigenvalue corresponding to the split direction along which the split is performed (Figure 2).
The points of the split ellipse are then reassigned to the two new ellipses according to the nearest neighbor rule. In this way, new ellipses occur, which are more elongated as they have greater eccentricity and their minor axes are closer to the contour (Figure 3). Moreover, this detailed representation of the point set provides accurate modeling of the joints, corners, and parts of the contour exhibiting high curvature.
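A minimal sketch of a single split step, under the update rule above: the two new centers are placed at μ ± √λ along the chosen eigenvector, and the points are reassigned to the nearest center. Names are illustrative, and the choice of the split direction (the one with the largest disconnectivity, as described earlier) is assumed to have been made already.

```python
import numpy as np

def split_cluster(points, split_eigvec, split_eigval):
    """Split one cluster into two along the given eigen-direction.

    points       : (N, 2) array of the points of the cluster being split
    split_eigvec : unit eigenvector e^t of the covariance matrix (split direction)
    split_eigval : corresponding eigenvalue lambda^t
    Returns the two new point subsets and the two new centers.
    """
    mu = points.mean(axis=0)
    offset = np.sqrt(split_eigval) * split_eigvec
    centers = np.vstack([mu + offset, mu - offset])   # the two antipodal centers

    # Reassign each point to the nearest of the two new centers.
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)  # (N, 2)
    labels = np.argmin(d, axis=1)
    return points[labels == 0], points[labels == 1], centers
```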
A variant of the method would be to compute the covariance matrix from the points on the convex hull of the point set, which provides more robustness to outliers.
2.2 Merge process
The role of the merge process is to reduce the complexity of the model. In case there exist adjacent ellipses whose major axes have similar orientations, it would be beneficial to merge and replace them with a more elongated ellipse. Therefore, in this step, ellipses are merged using the following rule: merge two consecutive ellipses, if the resulting ellipse has minimum eigenvalue smaller than a threshold T1 (linearity) and the marginal width between the two line clusters is smaller than a threshold T2 (disconnectivity).
Note that the threshold T1 could be set equal to the threshold used in the split process, where the value of parameter T1 specifies whether an ellipse has low eccentricity and needs to be split. In the merge process, it indicates whether merging two candidate ellipses would result in an ellipse with high eccentricity. One could use the same threshold in both processes, assuming the same significance. On the other hand, a relaxation of the merge threshold could lead to a rougher model of the points, smoothing out details such as joints. In our experiments, the merge threshold was selected to be the same as the split threshold. The same applies for threshold T2, which indicates whether two segments are close enough to be considered as one line segment.
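The merge rule can be sketched as a simple test on the union of two neighboring clusters, assuming the same thresholds T1 and T2 as in the split step. The helper mirrors the linearity function sketched in Section 2.1 and is repeated here so that the snippet stands alone; names are illustrative.

```python
import numpy as np

def linearity(points):
    """Smallest eigenvalue of the covariance matrix of the 2D points."""
    return np.min(np.linalg.eigvalsh(np.cov(points.T)))

def cluster_gap(points_a, points_b):
    """Smallest distance between a point of one cluster and a point of the other."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return float(d.min())

def should_merge(points_a, points_b, T1, T2):
    """Merge two neighboring clusters if the union is still almost linear (T1)
    and the gap between the two clusters is small (T2)."""
    merged = np.vstack([points_a, points_b])
    return linearity(merged) < T1 and cluster_gap(points_a, points_b) < T2
```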
The overall description of the method is presented in Algorithm 1.
Algorithm 1 Direct Split-and-Merge Algorithm
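Since the pseudocode body of Algorithm 1 is not reproduced here, the following Python sketch outlines the overall split-and-merge loop as described in Sections 2.1 and 2.2; it is not the authors' pseudocode. It reuses the helpers sketched above (linearity, disconnectivity, split_cluster, should_merge), and the split-direction selection is simplified to the principal eigenvector for brevity, whereas Section 2.1 selects the direction with the largest disconnectivity.

```python
import numpy as np
# Assumes linearity, disconnectivity, split_cluster and should_merge from the
# sketches in Sections 2.1 and 2.2 are available in scope.

def dsam(points, T1, T2):
    """Direct Split-and-Merge sketch: returns a list of point clusters,
    each to be summarized by the major axis of its fitted ellipse."""
    clusters = [np.asarray(points, dtype=float)]

    # Split phase: split clusters that are non-linear or disconnected.
    changed = True
    while changed:
        changed = False
        new_clusters = []
        for c in clusters:
            if len(c) > 2 and (linearity(c) > T1 or disconnectivity(c) > T2):
                eigvals, eigvecs = np.linalg.eigh(np.cov(c.T))
                # Simplification: split along the principal eigenvector.
                a, b, _ = split_cluster(c, eigvecs[:, -1], eigvals[-1])
                if len(a) > 0 and len(b) > 0:
                    new_clusters += [a, b]
                    changed = True
                    continue
            new_clusters.append(c)
        clusters = new_clusters

    # Merge phase: greedily merge pairs whose union stays linear and connected.
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if should_merge(clusters[i], clusters[j], T1, T2):
                    clusters[i] = np.vstack([clusters[i], clusters[j]])
                    del clusters[j]
                    merged = True
                    break
            if merged:
                break
    return clusters
```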
3 Experimental results and discussion
In this section, we evaluate the efficiency of the proposed algorithm. To that end, two categories of experiments were conducted. The purpose was to investigate the performance of the method both on shape data and on real images. Thus, various well-known databases were employed that contain either object silhouettes or scenes of real images. The GatorBait100 database [27] consists of 100 shapes of fishes grouped in 38 categories. The shapes of this database are not closed and contain many junctions. The MPEG7 shape database [26] consists of 1400 silhouettes of various objects clustered in 70 categories. The shape silhouette database used in [29], which contains 137 silhouettes of various objects clustered in 13 categories, was also used in our experiments. Finally, to investigate the behavior of the proposed algorithm on real scene images, the 257 images of the ETHZ image set [28] were also used. Table 1 gives a brief description of each database. In all cases, the edges were extracted with the Canny edge detector [35] and the coordinates of the edge pixels were used to describe the contour.
3.1 Numerical evaluation
In this section, we present the results (see Endnote a) of comparing the DSaM method with the widely used line segment fitting implementation of Kovesi [36]. Tables 2 and 3 summarize the numerical results, while Table 4 reports the execution times for the experiments on the MPEG7 [26], GatorBait100 [27], Brown [29], and ETHZ [28] datasets. Some representative images from these databases are given in Figure 4. As can be observed, in some cases there exist inner structures and thus the ordering of the points is not obvious. Note that, in order to share common parameters, the Kovesi implementation [36] was run with the disconnectivity threshold of our method; the execution time for computing that value is not included in the execution time of the Kovesi implementation. The model complexity is computed by the index

MC = K / N,

where K is the number of fitted line segments and N is the total number of points. Lower values of MC imply lower complexity and therefore a more compact representation.
Distortion measures the quality of the fit and is computed as the average distance between a point and its corresponding line segment, as determined by each method. Please note that the average length of the diagonal of the bounding box of the various datasets is about 500 units (ranging from 300 units in Brown [29] to 700 units in ETHZ [28]).
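For completeness, the two evaluation indices can be computed as follows. This is a straightforward sketch with illustrative names; the MC index follows the ratio stated above, and the distance helper mirrors the one used for Δ in Section 2.

```python
import numpy as np

def model_complexity(num_segments, num_points):
    """MC index: number of fitted line segments relative to the number of points."""
    return num_segments / num_points

def distortion(points, labels, lines):
    """Average distance between each point and its assigned line segment.
    lines[j] is a (point_on_line, unit_direction) pair for segment j."""
    dists = []
    for x, j in zip(points, labels):
        p, u = lines[j]
        v = np.asarray(x) - np.asarray(p)
        dists.append(abs(v[0] * u[1] - v[1] * u[0]))
    return float(np.mean(dists))
```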
In the proposed algorithm, there are two parameters to be specified a priori: a threshold that determines the elongation of an ellipse (T1) and a threshold characterizing the disconnectivity of a set (T2). Both parameters are used to decide whether to split (in the split process) or merge (in the merge process). A small value preserves details, while a larger one yields coarser results. For our experiments, we computed the values of both parameters from local statistics of the point set: T1 is derived from the smallest eigenvalues λ_i of the covariance matrices of the points in N_α(x_i), i = 1,…,N, where N_α(x_i) denotes the α-neighborhood of x_i and N is the number of points in the set, and T2 is derived from the corresponding α-neighborhoods as well.
Large values of α decrease the model complexity at the cost of a larger modeling error, and details are not preserved. In our experiments, we set α = ⌈0.01 × N⌉ for computing the values of T1 and T2. To make our implementation more efficient, instead of taking all points into consideration, we computed a random permutation of the point indices and used only the first 10% of them. Thus, for high-density datasets, such as the ETHZ database [28], the threshold values can be estimated quickly.
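The following sketch illustrates how such a data-driven estimate might be computed from the α-neighborhoods, using the 10% random subsampling mentioned above. The specific aggregations (averaging the smallest eigenvalues for T1 and the neighborhood radii for T2) are illustrative assumptions, since the paper's exact expressions are not reproduced here.

```python
import numpy as np

def estimate_thresholds(points, sample_fraction=0.10, rng=None):
    """Heuristic estimates of T1 and T2 from alpha-neighborhood statistics.

    Assumption: T1 is the average smallest eigenvalue of the covariance of each
    alpha-neighborhood and T2 the average maximum distance within it. These
    aggregations are illustrative, not the paper's exact formulas.
    """
    rng = np.random.default_rng() if rng is None else rng
    points = np.asarray(points, dtype=float)
    N = len(points)
    alpha = int(np.ceil(0.01 * N))                          # alpha = ceil(0.01 * N)
    sample = rng.permutation(N)[:max(1, int(sample_fraction * N))]

    lam, gaps = [], []
    for i in sample:
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:alpha + 1]]            # alpha nearest neighbors of x_i
        if len(nbrs) > 1:
            lam.append(np.min(np.linalg.eigvalsh(np.cov(nbrs.T))))
            gaps.append(np.max(np.linalg.norm(nbrs - points[i], axis=1)))
    return float(np.mean(lam)), float(np.mean(gaps))
```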
In general, the DSaM method and the Kovesi implementation produce models with similar complexity, which is expected since they employ the same thresholds. However, the DSaM method provides much more accurate results with respect to distortion (Tables 2 and 3). Concerning the execution time, one may observe that for simple datasets, such as the object silhouettes [26, 29], both methods are quite fast. However, for more complex datasets, such as ETHZ [28], DSaM is much faster (up to about eight times) than the Kovesi implementation, since these datasets contain a large number of points and the Kovesi implementation has to check them all, one by one.
As our method models line segments with ellipses, we also tried to fit line segments by exploiting various modifications of a typical Gaussian mixture model (GMM) [37], for example, using an incremental GMM or imposing constraints in the update step of the covariance matrices (decomposing each covariance matrix with SVD, replacing the minimum eigenvalue with a very small value, namely threshold T1, and then recomposing the covariance matrix). All these variants failed to produce satisfactory results. Moreover, the execution time was quite high (about ten times that of DSaM). Thus, we excluded these results from this presentation.
Finally, we conducted experiments to verify the robustness of the method against noise that degrades the contour of an object. To that end, we used three patterns (see Figure 5a,b,c), which were randomly repeated to create new images. A representative image is given in Figure 6. As ground truth for the number of line segments, we used 4 for the square, 3 for the triangle, and 10 for the star.
The set of unordered points was produced by simple edge detection. Then, zero-mean Gaussian noise with varying standard deviation was added in order to obtain several signal-to-noise ratio (SNR) configurations. A representative result of a degraded contour is given in Figure 5d. Note that no ordering of the points can be established in that case, and thus polygon approximation cannot be performed. To make the experiment independent of the noise configuration, each experiment was repeated 20 times. The algorithm assumes that a form of binary data (e.g., an edge map) is provided. Degradation by noise is performed after the edge extraction in order to examine the behavior of the algorithm in detecting line segments. If the noise were added to the original image, the edges would be erroneous and we would not have a standard baseline for evaluating the algorithm.
In Figure 7, we present the results of these experiments. The error is expressed as the absolute difference between the true number of segments and the number computed by our method. It can be observed that as the magnitude of the noise decreases, the error also decreases. The difference between the true and estimated number of segments is generally small, 3 segments on average with low variance (±2 segments), compared to a total of about 90 line segments on average, corresponding to a 3% deviation between true and estimated measurement. Thus, it can be claimed that the proposed method performs consistently and efficiently even if the contour is corrupted.
3.2 Comparison with the Hough transform
Since the proposed algorithm fits line segments to a set of points, we also tested it against the commonly used Hough transform (HT). However, since the standard HT is appropriate for fitting lines and not line segments, we applied the progressive probabilistic Hough transform (PPHT), as proposed in [30] and implemented in the OpenCV library [38]. The PPHT implementation requires three parameters: (1) a threshold indicating the minimum number of points in a bin of the line parameter space for the corresponding line to be considered as supported by a sufficient number of points, (2) the minimum length of a line segment, and (3) the maximum gap between line segments lying on the same axis. In our experiments, we fixed the last two parameters (after a trial-and-error procedure, keeping the values that best fit the examined points) and varied the threshold. The obtained PPHT results exhibited significant irregularities, such as a large number of overlapping lines for the same segment. Also, the corners of the shapes were not correctly captured. Representative experiments on the MPEG7 dataset [26] are shown in Figure 8a,b,c, while the solution of our DSaM algorithm is illustrated in Figure 8d.
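For reference, this comparison can be reproduced with OpenCV's HoughLinesP function, which exposes exactly the three parameters discussed above. The file name and the parameter values below are placeholders, not those used in the experiments.

```python
import cv2
import numpy as np

# Load a silhouette image and extract its edge map (placeholder file name).
img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Progressive probabilistic Hough transform [30]: the three parameters are
# threshold, minLineLength and maxLineGap.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=5)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), 128, 1)   # draw the detected segments
```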
The PPHT is based on a histogram, which ties the accuracy of the result to the number of bins used. Also, a threshold must be established to eliminate lines with small support in the final result. A small number of bins may lead to an underestimation of the number of segments, while a large number of bins increases the complexity of the model. As far as the threshold is concerned, its value has similar effects on the final model: a large value may drop some segments, while a small value may produce a large number of fitted lines, analogous to a GMM with one component per point. A more important drawback of the PPHT is that many overlapping lines may model the same line segment. Figure 8 presents PPHT solutions for a given set of points and various parameter values.
4 Retinal fundus image feature characterization
In this section, we introduce an application of the proposed method. Note however that this is only an indicative presentation, as one could also use our algorithm for solving other problems that need the modeling of line segments, such as the extraction of road layers from raster maps [39], where the standard Hough Transform is used to detect lines in some part of their algorithm. Another application is the detection of the vanishing point in an indoor scene [40].
A methodology that extracts features from the retinal fundus image and characterizes them will be presented. The goal is to detect the intersection points between the vessels, as they could provide useful information to an automatic diagnosis system.
The detection and characterization of various topological features of the retinal vessels are an important step in retinal image processing algorithms within an autonomous diagnosis system. A deviation from common topological feature patterns may be an indicator of anomaly. A comprehensive study may be found in [41]. In a typical retinal vessel structure, more than 100 junctions may be present [32], a fact that makes the manual characterization a tedious and time-consuming task. Typical retinal features are presented in Figure 9.
Our algorithm detects the following types of features: end-points (points at the extremities of the vessels), interior-points (points along a vessel), junctions (either a new vessel branching off a longer one, i.e., a T-junction, or a vessel splitting into two or more new vessels, i.e., a bifurcation), and crossovers (one vessel passing over another). Please note that further processing is needed to distinguish between a crossover and a bifurcation.
To investigate the accuracy of the proposed algorithm, experiments were conducted on the DRIVE database [31], which includes 40 retinal images along with manual segmentations of the vessels. The ground truth used in [32, 33] was also employed. In our experiments, the manual segmentations were used, as the scope of our algorithm is to detect junctions, crossovers, and end-points; the reader may refer to [42] or [43] for a detailed vessel extraction algorithm, which constitutes a preprocessing step of the whole chain. At first, a Canny edge detector [35] is applied to extract the borders of the vessels, and then a thinning algorithm [44] is used to extract the center line of the vessels. In Figure 10a, the original image is shown. Figure 10b presents the manual segmentation, while the data used in our retinal parsing algorithm are shown in Figure 10c.
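A possible rendering of this preprocessing step uses scikit-image in place of the authors' implementation: here, skeletonization of the binary vessel mask is used as a stand-in for the Canny-plus-thinning step described above, and the file name is a placeholder.

```python
import numpy as np
from skimage.io import imread
from skimage.morphology import skeletonize

# Manual vessel segmentation from the DRIVE database (placeholder file name).
vessels = imread("drive_manual_segmentation.png", as_gray=True) > 0.5

# Stand-in for the Canny + thinning step: skeletonize the binary vessel mask
# to obtain one-pixel-wide center lines.
centerline = skeletonize(vessels)

# Coordinates of the center line pixels, used as the unordered point set
# that is fed to the split-and-merge method of Section 2.
points = np.column_stack(np.nonzero(centerline)).astype(float)
```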
Once the center line of each vessel is extracted, the method presented in Section 2 is applied in order to extract the corresponding line segments and to assign each point to its line segment. For each line segment, its extreme points are the two points of the corresponding line cluster that lie farthest apart, i.e., the points at the ends of the segment.
In Figure 11, the points x (summarizing the vessel structure) are depicted in red and black, while the corresponding extreme points y are shown as green and blue dots. A rule is defined to characterize a point as an end-point, a junction, or a crossover. Let ℓ(x) denote the index of the line cluster to which point x belongs. In Figure 11, two line clusters are shown (clusters 1 and 2). To define the neighborhood of extreme points, a neighborhood radius threshold is defined as

T_n = w · d̄,     (9)

where d̄ is the mean distance between all nearest neighbors and w is a constant that is learned, as explained later in this section. Thus, for an extreme point y, the neighborhood N(y) is the set of points x such that ||x − y|| ≤ T_n. In order to characterize a point y, we define n(y) as the number of distinct line clusters to which the points of N(y) belong.
In the example shown in Figure 11, the studied extreme point y is the green one, and its neighborhood is defined by a circle centered at y with radius T_n. The red and black points lying within that circle are the neighbors of y. These points belong either to line cluster 1 or to line cluster 2, and thus n(y) = 2. If all neighbors of y belong to the same line cluster as y, then y would be an end-point (n(y) = 1). If n(y) ≥ 3, y would be a junction or a crossover; the algorithm in its current form does not discriminate between them.
A special case occurs when n(y) = 2, where the studied point y is either an interior-point or a junction (T-junction). In that case, further elaboration is needed to characterize the extreme point, by examining whether N(y) contains an extreme point of the other cluster or not. Thus, if N(y) contains such an extreme point y′ and y also belongs to the neighborhood of y′, then y is an interior-point; otherwise, it is a junction (T-junction). Please note that the neighboring relationship described in this section is not symmetric. For example, in Figure 11, the green point, which is one of the extreme points of cluster 1, is a neighbor of cluster 2 (as there are some points of cluster 2 within the yellow circle that defines the neighborhood of the green point). However, none of the extreme points of cluster 2 contains a point from cluster 1 in its neighborhood, and thus the relationship does not hold in the opposite direction.
In our example in Figure 11, the former case would mean that one of the two blue points lies inside the yellow circle. In case N(y) contains no extreme point of the other cluster, as shown in Figure 11, we compute the minimum distance (denoted by d in Figure 11) between the nearest neighbor of y among the points of the other neighboring line cluster (the orange point among the black points in Figure 11) and the corresponding extreme points of that cluster (the blue points in Figure 11). If d > T_n, then y is a junction (T-junction); otherwise, it is an interior-point.
A detailed description of the rules used to characterize an extreme point y is presented in Algorithm 2.
Algorithm 2 Rules for vessel features characterization
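Since the body of Algorithm 2 is not reproduced here, the sketch below renders the rules described above in Python; it is not the authors' pseudocode. Here n(y) counts the distinct clusters in the neighborhood of an extreme point, and the n(y) = 2 case is handled with the distance test of the previous paragraph. All names are illustrative.

```python
import numpy as np

def mean_nn_distance(points):
    """Mean distance between each point and its nearest neighbor."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def classify_extreme_point(y, y_cluster, points, labels, extreme_points, w=2.9):
    """Characterize an extreme point y of cluster y_cluster.

    points         : (N, 2) center line points, labels : (N,) cluster indices
    extreme_points : dict mapping cluster index -> (2, 2) array of its two extremes
    Returns 'end-point', 'junction/crossover', 'T-junction' or 'interior-point'.
    """
    T_n = w * mean_nn_distance(points)                      # Equation (9)
    nbr_mask = np.linalg.norm(points - y, axis=1) <= T_n    # neighborhood N(y)
    nbr_clusters = set(labels[nbr_mask])

    if nbr_clusters == {y_cluster}:
        return "end-point"                                  # n(y) = 1
    if len(nbr_clusters) >= 3:
        return "junction/crossover"                         # n(y) >= 3

    # n(y) = 2: decide between interior-point and T-junction.
    other = (nbr_clusters - {y_cluster}).pop()
    for y_prime in extreme_points[other]:
        # y' in N(y) (and, symmetrically, y in N(y')): the segments meet end to end.
        if np.linalg.norm(y_prime - y) <= T_n:
            return "interior-point"
    # Otherwise: distance from y's nearest neighbor in the other cluster
    # to that cluster's extreme points.
    other_pts = points[labels == other]
    nearest = other_pts[np.argmin(np.linalg.norm(other_pts - y, axis=1))]
    d = min(np.linalg.norm(nearest - e) for e in extreme_points[other])
    return "T-junction" if d > T_n else "interior-point"
```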
Note that since the ground truth refers to the original vessels and not to their center lines, which are the input of our method, we determined a value T_conf that defines a confidence region around a ground truth point. A computed point is considered to match a ground truth point if it lies in its confidence region. In our experiments, T_conf is defined as a percentage (1%) of the length of the diagonal of the bounding box of the points x. Figure 10d shows the confidence regions depicted as circles with radius T_conf. To establish a robust value for the constant w in Equation (9), the precision and recall rates were computed for values of w between 1.8 and 4.0 with a step of 0.8. Then, the F measure (harmonic mean) was calculated as

F = \frac{2 P R}{P + R},

where P is the precision and R is the recall:

P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN},

where TP is the number of true positives, that is, the number of relevant items retrieved; FP is the number of false positives, that is, the number of irrelevant items retrieved; and FN is the number of false negatives, that is, the number of relevant items not retrieved.
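These quantities are computed directly from the match counts; a small helper with illustrative names and arbitrary example counts (not taken from the paper's experiments):

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall and F measure from true/false positive and false negative counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = 2 * p * r / (p + r)
    return p, r, f

# Example with arbitrary counts:
p, r, f = precision_recall_f(tp=97, fp=9, fn=2)
```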
Figure 12 shows the plot of the F measure for various values of the parameter w in Equation (9). In our experiments, the F measure attains its maximum value for w = 2.9; the corresponding point is depicted with a black square in Figure 12. In that case, the precision equals 91.59% and the recall 98.58%. The mean nearest-neighbor distance computed from our experimental data leads, through Equation (9), to T_n = 4.10, corresponding to a neighborhood radius of approximately 4 pixels.
The mean execution time was 109 s for the extraction of the line segments and 12 s for the extraction and characterization of features using Matlab on a typical Dual Core 2 × 2.50 GHz machine with 2.0 GB RAM.
5 Conclusion
The problem of fitting several lines in order to describe a set of 2D unordered points is crucial in several computer vision applications. To that end, a split-and-merge algorithm was presented in this paper that iteratively fits a number of elongated ellipses to 2D points (DSaM). The points are represented by the major axes of the ellipses, while the minor axes account for possible perturbations from the linear modeling. Furthermore, a merge process combines neighboring ellipses with collinear major axes in order to reduce their number and consequently the complexity of the solution. An important aspect of the algorithm is that it does not require an ordering of the points and accurately handles contours or edge chains that contain joints and multiple structures.
The proposed method depends on two thresholds that control the flexibility of the model. Robust estimation of these parameters in an adaptive scheme is an issue that needs further elaboration and is the subject of ongoing research, along with the application of the method to other problems [28, 45, 46] that could directly benefit from an effective solution to this problem.
Endnote
a. Matlab code available at http://www.cs.uoi.gr/~dgerogia
References
Cantoni V, Lombardi L, Porta M, Sicard N: Vanishing point detection: representation analysis and new approaches. In 11th International Conference on Image Analysis and Processing (ICIAP). Palermo; 26–28 September 2001:90-94.
Dori D, Wenyin L: Sparse pixel vectorization: an algorithm and its performance evaluation. IEEE. Trans. Pattern. Anal. Mach. Intell 1999, 21(3):202-215. 10.1109/34.754586
Se S, Brady M: Road feature detection and estimation. Mach. Vis. Appl 2003, 14(3):157-165.
Ramer U: An iterative procedure for the polygonal approximation of plane curves. Comput. Graph. Image Process. 1972, 1: 244-256. 10.1016/S0146-664X(72)80017-0
Kolesnikov A: ISE-bounded polygonal approximation of digital curves. Pattern Recognit. Lett 2012, 33: 1329-1337. 10.1016/j.patrec.2012.02.015
Illingworth J, Kittler J: A survey of the Hough transform. Comput. Vis. Graph. Image Process 1988, 44(1):87-116. 10.1016/S0734-189X(88)80033-1
Leavers VF: Which Hough transform? CVGIP: Image Underst 1993, 58(2):250-264. 10.1006/ciun.1993.1041
Xu L, Oja E, Kultanen P: A new curve detection method: Randomized Hough Transform (RHT). Pattern Recognit. Lett 1990, 11: 331-338. 10.1016/0167-8655(90)90042-Z
McLaughlin RA: Randomized Hough transform: improved ellipse detection with comparison. Pattern Recognit. Lett 1998, 19: 299-305. 10.1016/S0167-8655(98)00010-5
Kiryati N, Eldar Y, Bruckstein AM: A probabilistic Hough transform. Pattern Recognit 1991, 24(4):303-316. 10.1016/0031-3203(91)90073-E
Matas J, Galambos C, Kittler J: Progressive probabilistic Hough transform. In British Machine Vision Conference (BMVC). London; September 1998:256-265.
Chung K, Lin Z, Huang S, Huang Y, Liao HM: New orientation-based elimination approach for accurate line-detection. Pattern Recognit. Lett 2010, 31(1):11-19. 10.1016/j.patrec.2009.09.013
Chatzis V, Pitas I: Fuzzy cell Hough transform for curve detection. Pattern Recognit 1997, 30(12):2031-2042. 10.1016/S0031-3203(97)00025-3
Kälviäinen H, Hirvonen P, Xu L, Oja E: Probabilistic and non-probabilistic Hough transforms: overview and comparisons. Image Vis. Comput 1995, 13(4):239-252. 10.1016/0262-8856(95)99713-B
Atiquzzaman M, Akhtar MW: A robust Hough transform technique for complete line segment description. Real Time Imaging 1995, 1: 419-426. 10.1006/rtim.1995.1043
Chau C, Siu W: Adaptive dual-point Hough transform for object recognition. Comput. Vis. Image Underst 2004, 96(1):1-16. 10.1016/j.cviu.2004.04.005
Cheng HD, Guo Y, Zhang Y: A novel Hough transform based on eliminating particle swarm optimization and its applications. Pattern Recognit 2009, 42(9):1959-1969. 10.1016/j.patcog.2008.11.028
Moore GA: Automatic scanning and computer process for the quantitative analysis of micrographs and equivalent subjects. In Pictorial Pattern Recognition. Washington, DC: Thompson Book Co.; 1968:275-362.
Li C, Wang Z, Li L: An improved Hough transform algorithm on straight line detection based on Freeman chain code. In 2nd International Congress on Image and Signal Processing (ICISP). Tianjin; 17–19 October 2009:1-4.
Fischler MA, Bolles RC: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24(6):381-395. 10.1145/358669.358692
Bhowmick P, Bhattacharya B: Fast polygonal approximation of digital curves using relaxed straightness properties. IEEE Trans. Pattern Anal. Mach. Intell 2007, 29(9):1590-1602.
Leite J, Hancock E: Iterative curve organisation with the EM algorithm. Pattern Recognit. Lett 1997, 18(2):143-155. 10.1016/S0167-8655(96)00119-5
Gerogiannis D, Nikou C, Likas A: A split-and-merge framework for 2D shape summarization. In 7th International Symposium on Image and Signal Processing and Analysis (ISPA). Dubrovnik; 4–6 September 2011:206-211.
Rosenfeld A, Pfaltz JL: Sequential operations in digital picture processing. J. Assoc. Comput. Mach 1966, 13(4):471-494. 10.1145/321356.321357
Lee DT: Medial axis transformation of a planar shape. IEEE Trans. Pattern Anal. Mach. Intell 1982, 4(4):363-369. doi:10.1109/TPAMI.1982.4767267
Temple University, College of Science and Technology: MPEG-7 dataset. http://www.dabi.temple.edu/~shape/MPEG7/dataset.html. Accessed 27 January 2014.
University of Florida: GatorBait 100. http://www.cise.ufl.edu/~anand/GatorBait_100.tgz. Accessed 27 January 2014.
Ferrari V, Tuytelaars T, Gool LV: Object detection by contour segment networks. In European Conference on Computer Vision. Berlin Heidelberg: Springer-Verlag,; 2006:14-28.
Sebastian TB, Klein PN, Kimia BB: Recognition of shapes by editing their shock graphs. IEEE Trans. Pattern Anal. Mach. Intell 2004, 26(5):550-571. 10.1109/TPAMI.2004.1273924
Matas J, Galambos C, Kittler J: Robust detection of lines using the progressive probabilistic Hough transform. Comput. Vis. Image Underst 2001, 78: 119-137.
Staal JJ, Abramoff MD, Niemeijer M, Viergever MA, van Ginneken B: Ridge based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23(4):501-509. 10.1109/TMI.2004.825627
Azzopardi G, Petkov N: Detection of retinal vascular bifurcations by trainable V4-like filters. In 14th International Conference on Computer Analysis of Images and Patterns (CAIP). Seville; 29–31 August 2011:451-459.
Azzopardi G, Petkov N: Retinal fundus images - ground truth of vascular bifurcations and crossovers. http://www.cs.rug.nl/~imaging/databases/retina_database/retinalfeatures_database.html. Accessed 27 January 2014.
Marr D: Vision. New York: Freeman Publishers; 1982.
Canny J: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell 1986, 8: 679-698.
Kovesi PD: MATLAB and Octave Functions for Computer Vision and Image Processing. Centre for Exploration Targeting, School of Earth and Environment, The University of Western Australia. http://www.csse.uwa.edu.au/~pk/research/matlabfns/. Accessed 27 January 2014.
Bishop CM: Pattern Recognition and Machine Learning. Dordrecht: Springer; 2006.
Bradski G, Kaehler A: Learning OpenCV: Computer Vision with the OpenCV Library. Sebastopol: O’Reilly Media; 2008.
Chiang Y-Y, Knoblock CA: A method for automatically extracting road layers from raster maps. In 10th International Conference on Document Analysis and Recognition (ICDAR). Barcelona; 26–29 July 2009:838-842.
Gerogiannis D, Nikou C, Likas A: Fast and efficient vanishing point detection in indoor images. In 21st International Conference on Pattern Recognition (ICPR). Tsukuba; 11–15 November 2012.
Patton N, Aslam TM, MacGillivray T, Deary IJ, Dhillon B, Eikelboom RH, Yogesan K, Constable IJ: Retinal image analysis: concepts, applications and potential. Prog. Retin. Eye Res 2006, 25:99-127. 10.1016/j.preteyeres.2005.07.001
Lemaitre C, Perdoch M, Rahmoune A, Matas J, Miteran J: Detection and matching of curvilinear structures. Pattern Recognit 2011, 44: 1514-1527. 10.1016/j.patcog.2011.01.005
Sofka M, Stewart CV: Retinal vessel extraction using multiscale matched filters, confidence and edge measures. IEEE Trans. Med. Imaging 2006, 25(12):1531-1546.
Gonzalez RC, Woods RE: Digital Image Processing. Upper Saddle River: Prentice Hall; 2008.
Ferrari V, Jurie F, Schmid C: From images to shape models for object detection. Int. J. Comput. Vis 2010, 87(3):284-303. 10.1007/s11263-009-0270-9
Tardif J-P: Non-iterative approach for fast and accurate vanishing point detection. In 12th International Conference on Computer Vision (ICCV). Kyoto; 27 September–04 October 2009:1250-1257.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Gerogiannis, D., Nikou, C. & Likas, A. Modeling sets of unordered points using highly eccentric ellipses. EURASIP J. Adv. Signal Process. 2014, 11 (2014). https://doi.org/10.1186/1687-6180-2014-11