

AUTOMATIC RECOGNITION OF HIGHWAY LANE
BORDERS FROM LARGE SCALE DIGITAL IMAGES
Mohamed Ibrahim Zahran
Associate Professor of Surveying and Photogrammetry
Faculty of Engineering at Shoubra, Benha University

ABSTRACT

This research presents an approach for the automatic extraction of the stripes that
signify the borders of highway lanes from large-scale digital images. The highway
width and centerline can consequently be resolved. In this approach, edge points
yielded by the Canny edge operator are aggregated into edge segments. Non-parametric
figures are then generated, and only closed shapes are extracted from them. Finally,
the border stripes are recognized by applying relevant shape measures to all
identified closed figures. Two image patches, extracted from a large-scale digital
aerial image, are used for the research experimentation. Each of the two patches
exhibits part of a highway network. The results obtained by applying the proposed
approach confirm the validity of utilizing appropriate shape measures in the
automatic recognition of closed figures.

Keywords: Edge Detection, Feature Extraction, Shape Analysis, Shape Measures,
Highway Networks.

1. INTRODUCTION

Recently, the automation of road extraction from digital imagery has become an
area of considerable interest. The desire to provide and update data for
geographic information systems (GIS) has driven research on this topic. For
urban planning applications such as traffic flow analysis and simulation,
estimation of air and noise pollution, and street maintenance, there is a high
demand for accurate and detailed information about the road network. A survey
on 3D city models by the European Organization for Experimental Photogrammetric
Research revealed that most participants stated that information about the road
network is of the greatest interest to them [4].

For a number of cities in Europe and North America, considerable parts of the
road network are already digitally mapped, mainly for purposes of navigation
and route planning. In practice, however, the application for which the data is
collected is the dominant factor influencing the level of detail of the data
acquisition. This in turn determines which elements are considered significant
and which are irrelevant. For navigation purposes, for example, the topology of
the road network is more important than the road description. Therefore, the
road information provided in most existing area-wide road databases comprises
only the road axis and the road width, lacking the details required for other
applications.

Aerial images are a valuable source of detailed information about urban
features, including roads and their elements such as lanes and road markings.
Thus, even where GIS data already exist, detailed road data can be extracted
with high accuracy by analyzing high-resolution aerial images. Examples of road
data acquired from aerial images are the number of lanes, the number of
vehicles, queue lengths, and vehicle velocities, which are necessary for
estimating traffic flow conditions on high-capacity roads. However, the manual
procedures required for extracting the desired information are expensive and
time-consuming. There is thus a strong economic incentive to automate the
extraction of roads from aerial and satellite imagery, and much research has
addressed this problem. Valuable attempts are presented in [2,4,5,7,10].
Nevertheless, fully automatic road extraction remains an unsolved problem.

Two different approaches to automatic road extraction have been developed
during the past years. The first makes use of various scales to detect road
elements and employs local grouping criteria as well as context information.
This approach is limited to gray-scale imagery of high resolution, where roads
appear as uniform regions. The second approach deals with the connectivity of
roads utilizing information derived from multi-spectral satellite imagery.
Here, roads are modeled as lines, and lines detected in multiple channels are
merged together. The two approaches can be combined to exploit their strengths
and to help overcome their shortcomings [1].

2. EXTRACTION OF LINEAR FEATURES

Lines and edges are important objects for many applications and are the
low-level primitives of choice to extract from images. Roads, railroads, and
rivers, for example, can be regarded as lines in aerial or satellite images.
The extraction of such objects is crucial for the data capture needed to update
GIS databases. The width of an extracted line is essential for applications in
which, for example, the type of a road must be characterized. Edges are also
necessary in numerous other applications, e.g., building extraction and object
recognition [9].

Extracting lines commonly begins with detecting local edges, which are then
aggregated into more globally defined lines using various grouping criteria.
Local edges are detected using an edge operator. Many edge detection methods
are described in the literature. There are several differential edge detectors,
which are either first-difference or second-difference operators. Two popular
edge detectors are the Laplacian of Gaussian (LoG) and Canny’s detector. The
LoG is a differencing operator that detects edges at the zero crossings of the
image after it is first smoothed with a Gaussian function. Zero crossings are
the locations at which the sign changes. Canny’s detector is similar to the LoG
but belongs to the class of directional derivatives [8].

Canny’s edge detector is used in this paper to find edges in the test image
patches. Here, the image patch is convolved with a Gaussian function, and the
first directional derivative of the convolved image is computed. The gradient
magnitude at each point of the convolved image is then calculated, and
non-maximum suppression is performed in the direction of the gradient. The
resulting edge image is subjected to a threshold in order to eliminate false
edges. Finally, a fine-to-coarse technique can be applied to mark additional
edges.
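As an illustration, the following is a minimal MATLAB sketch of this step. The
file name, smoothing scale, and hysteresis thresholds are assumptions for
illustration, not the values used in the experiments.

    % Minimal sketch of the edge detection step (assumed file name and thresholds)
    I = im2double(imread('patch.tif'));      % read the image patch
    if size(I,3) == 3, I = rgb2gray(I); end  % convert to gray scale if needed
    sigma  = 1.5;                            % assumed Gaussian smoothing scale
    thresh = [0.05 0.20];                    % assumed [low high] hysteresis thresholds
    % 'edge' performs Gaussian smoothing, gradient computation, non-maximum
    % suppression, and hysteresis thresholding internally
    E = edge(I, 'canny', thresh, sigma);     % binary edge map
    imshow(E);                               % display the binary edge patch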

Edge detectors produce an output at each pixel. To produce discrete edges,
pixels whose output value exceeds a threshold are labeled as edge pixels. A
real image edge usually causes a region of pixels around the edge to exceed the
threshold. A thinning operation is thus employed so that only the maximum
response in the direction perpendicular to the edge is called an edge [6].
Detected edge points are then aggregated into edge segments through an edge
tracking algorithm utilizing the edge attributes of neighboring points.
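A simplified MATLAB sketch of this aggregation step is given below. The paper’s
tracking algorithm uses the gradient magnitude and orientation of neighboring
points; here, 8-connected component labeling is used as a stand-in, and the
minimum segment length is an assumed value.

    % Group touching edge pixels into segments (stand-in for gradient-based tracking)
    CC = bwconncomp(E, 8);                     % 8-connected edge segments
    len = cellfun(@numel, CC.PixelIdxList);    % length of each segment in pixels
    keep = len >= 10;                          % assumed minimum segment length
    CC.PixelIdxList = CC.PixelIdxList(keep);   % discard very short segments
    CC.NumObjects   = nnz(keep);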

3. SHAPE DESCRIPTORS

A suitable representation or description of the extracted edges is to be
generated utilizing the attributes of edge points. A well-known method of
representing a shape boundary is discrete boundary coding, which encodes a
continuous contour as a sequence of numbers, each corresponding to a segment
direction [3]. This representation is compact and invariant to translation, but
it depends on rotation and scaling. However, the differences in the code
reflect the turning angles, which are invariant to these transformations.

Many other popular methods of describing a shape boundary are available. In
tangential representations (ψ-s), the tangent angle ψ is encoded as a function
of arc length s. This is similar to differential chain codes, but need not be
pixel-to-pixel. In radial representations (r-s), the distance r from the center
is encoded as a function of arc length s [8]. The Discrete Fourier Transform
(DFT) yields an accurate shape descriptor that is invariant to geometric
transformations. It gives a sequence of complex coefficients, called Fourier
descriptors, representing the shape in the frequency domain, where the lower
frequencies indicate the general shape and the higher frequencies denote shape
details.
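The sketch below illustrates Fourier descriptors in MATLAB for one closed
boundary; the use of bwboundaries and the normalization choices are assumptions
made for illustration.

    % Fourier descriptors for one closed boundary (illustrative normalization)
    B = bwboundaries(E);            % trace closed boundaries in the binary image
    b = B{1};                       % one N-by-2 list of (row, col) boundary points
    z = b(:,2) + 1i*b(:,1);         % encode boundary points as complex numbers
    Z = fft(z);                     % complex Fourier descriptors
    Z(1) = 0;                       % drop the DC term -> translation invariance
    Z = Z / abs(Z(2));              % divide by the first harmonic -> scale invariance
    d = abs(Z);                     % magnitudes -> rotation/start-point invariance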

Although shape descriptors using only boundary points can be used for plane
figures, descriptors that also utilize interior points are likely to be more
robust. This is because small changes in area may cause considerable changes in
the boundary, especially for figures with uneven boundaries. Straightforward
shape measures of a plane figure can be derived from the figure perimeter and
the figure area.

The ratio Area/(Perimeter)² is a measure that is invariant to figure size,
position and orientation. The problem, however, is that two figures of
dissimilar shapes may have the same value of this measure. For elongated
figures, a better measure is the length/width ratio of the minimum bounding
rectangle that encloses the figure completely. For a figure that is the
projection of a 3-D object, however, changes due to perspective and occlusion
are not accounted for.
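Both measures can be computed directly from a binary image of the closed
figures, as in the MATLAB sketch below; note that regionprops returns the
axis-aligned bounding box, which is used here as a simple stand-in for the
minimum bounding rectangle.

    % Compute the two simple shape measures for each filled closed figure
    F = imfill(E, 'holes');                            % fill the closed figures
    S = regionprops(F, 'Area', 'Perimeter', 'BoundingBox');
    for k = 1:numel(S)
        compactness = S(k).Area / S(k).Perimeter^2;    % Area/(Perimeter)^2
        w = min(S(k).BoundingBox(3:4));                % bounding-rectangle width
        l = max(S(k).BoundingBox(3:4));                % bounding-rectangle length
        elongation = l / w;                            % length/width ratio
    end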

Analytical shape measures use coefficients (parameters) obtained by expanding
or approximating a figure in terms of some basis functions. These coefficients
may be combined to attain invariance to position, scale and rotation. One
popular method in this regard is based on moment theory and uses region-based
moments to characterize the shape of an object with a set of parameters that
are invariant to geometric changes.
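As an illustration of such moment-based measures, the first two Hu moment
invariants can be computed from the normalized central moments of a binary
region, as in the sketch below (R is assumed to be a logical mask of one
figure).

    % First two Hu moment invariants of a binary region R (logical mask)
    [r, c] = find(R);                              % pixel coordinates of the region
    rc = r - mean(r);  cc = c - mean(c);           % centered coordinates
    m00 = numel(r);                                % zeroth moment (region area)
    mu  = @(p, q) sum((cc.^p) .* (rc.^q));         % central moment of order p+q
    eta = @(p, q) mu(p, q) / m00^(1 + (p + q)/2);  % normalized -> scale invariant
    phi1 = eta(2,0) + eta(0,2);                    % invariant to shift/scale/rotation
    phi2 = (eta(2,0) - eta(0,2))^2 + 4*eta(1,1)^2; % second Hu invariant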

4. THE PROPOSED APPROACH

This section presents the proposed approach for the automatic extraction of the
stripes signifying the borders of highway lanes from large-scale digital
images. Accordingly, the width and centerline of the highway are determined.
Images considered for this process must have a spatial resolution that
guarantees that road features manifest as elongated regions instead of single
edges. Terrestrial images as well as large-scale aerial images are good
examples.

The approach starts by applying Canny’s edge detector to find edges in the
image under consideration. The detected edge points are aggregated into edge
segments utilizing both the gradient magnitude and orientation. Because the
stripes of interest appear as uniform white regions against a dark background,
their edges form closed shapes. Thus, extracted edge segments that do not form
closed shapes are ignored and excluded from subsequent processing.

On the other hand, closed edge segments that start and end at the same point
are considered. For each of these segments, the number of segment points and
the coordinates of each segment point are determined, from which the segment
perimeter and segment area are calculated. Another shape measure is also
proposed here: the maximum of the stripe extent in the row direction and its
extent in the column direction. Since the stripes are rectangular, with a
length much larger than the width, the maximum extent of a stripe is nearly
equal to one half of its perimeter, both in pixel units.

In order to extract the desired stripes, the relevant shape measures are
applied to all identified closed shapes. A pre-assessment of the stripe shape
as it appears in the image may be obtained by scaling the real stripe
dimensions or by direct measurement of the stripe dimensions on the image.
Where the stripes occur in sets of different sizes, the pre-assessment is
carried out for each set. By connecting the centroids of the extracted stripes,
the lane borders, and in turn the highway width and centerline, can be
determined.
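The final step can be sketched in MATLAB as below; the mask of accepted stripes
(acceptedMask) and the ordering of the centroids along the image x-direction
are assumptions made for illustration.

    % Connect the centroids of the accepted stripes along one lane border
    S = regionprops(acceptedMask, 'Centroid');   % centroids of accepted stripes
    C = sortrows(cat(1, S.Centroid));            % order centroids along the border
    plot(C(:,1), C(:,2), '-o');                  % connected lane border polyline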

5. EXPERIMENTATION

Two image patches, extracted from a large-scale digital aerial image and shown
in Figures (1) and (5), are used for the research experimentation. Each of the
two patches exhibits part of a highway network. The edge segments are extracted
by applying Canny’s edge detector and the edge tracking algorithm to the two
image patches, using the MATLAB software package. Figures (2) and (6)
illustrate the resulting binary edge patches containing the edge segments
extracted from the two image patches.

It can be noted that there are two sets of stripes in each of the two patches.
The two sets have dissimilar stripe lengths, although both have a nearly
similar width of about 4 pixels. The short stripes have perimeters in the
thirties and forties of pixels, whereas the long stripes have perimeters in the
seventies and eighties of pixels. Since the perimeter (P) of a stripe is twice
the sum of its length (L) and its width (W), the stripe area (A) can be set
roughly equal to W(P/2-W).
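As a quick check of this relation, a short stripe with P = 44 pixels and
W = 4 pixels gives A ≈ 4 × (44/2 - 4) = 72 pixels, which agrees with the areas
listed for such stripes in Table (1).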

Prototype software has been developed and applied to the resulting binary edge
patches. The software extracts the edge segments of closed figures and computes
the perimeter of each figure by counting the figure points. The figure area is
estimated from the pixel coordinates of the figure points. The extent in the
row direction, the extent in the column direction, and the maximum of the two
extents are also determined for each closed figure.
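A minimal sketch of these computations is shown below, assuming b is an N-by-2
closed boundary with the first point repeated as the last, as returned by
bwboundaries; the shoelace formula is used here as one way of estimating the
area from the pixel coordinates.

    % Perimeter, area, and extents of one closed figure from its boundary points
    P = size(b, 1) - 1;                  % perimeter as the number of figure points
    x = b(:,2);  y = b(:,1);             % column and row coordinates
    % Shoelace formula: area from the pixel coordinates of the boundary points
    A = 0.5 * abs(sum(x(1:end-1).*y(2:end) - x(2:end).*y(1:end-1)));
    rowExt = max(y) - min(y) + 1;        % extent in the row direction
    colExt = max(x) - min(x) + 1;        % extent in the column direction
    maxExt = max(rowExt, colExt);        % maximum of the two extents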

In order to identify the stripes of interest, the software considers as a
candidate each closed figure that satisfies the perimeter and area criteria.
The perimeter criterion requires that the perimeter of a candidate figure lie
either in the range of 30 pixels to 50 pixels or in the range of 70 pixels to
90 pixels. The first range is appropriate for picking up the short stripes,
whereas the other range supports extracting the long stripes. The area
criterion requires that the area of a candidate figure lie either in the range
of 4(30/2-4)-T pixels to 4(50/2-4)+T pixels or in the range of 4(70/2-4)-T
pixels to 4(90/2-4)+T pixels, where T is a tolerance of a few pixels, say 10
pixels. Again, the two area ranges serve the extraction of the short and long
stripes, respectively.
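These criteria translate into a simple test per closed figure, sketched below
in MATLAB with W = 4 pixels and T = 10 pixels as in the text; the elongation
check described in Section 6 is included as the additional constraint used in
the software.

    % Candidate test for one closed figure with perimeter P, area A, and maxExt
    W = 4;  T = 10;                              % stripe width and area tolerance
    isShort = P >= 30 && P <= 50 && ...
              A >= W*(30/2 - W) - T && A <= W*(50/2 - W) + T;
    isLong  = P >= 70 && P <= 90 && ...
              A >= W*(70/2 - W) - T && A <= W*(90/2 - W) + T;
    isElongated = abs(maxExt - P/2) <= 1;        % max extent nearly equals P/2
    isStripe = (isShort || isLong) && isElongated;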

Other attributes of the stripe shape, such as Area/(Perimeter)² and the maximum
extent, are also investigated to determine their validity in recognizing the
desired stripes.

6. RESULTS AND ANALYSIS

Most of the stripes existing in the two image patches are successfully
recognized using the criteria of stripe perimeter and stripe area as applied to
the extracted closed figures. Figures (3) and (7) exhibit the resulting binary
image patches containing the extracted stripes. Figures (4) and (8) show
enlarged views of the stripes contained in the rectangles drawn in Figures (3)
and (7). Tables (1) and (2) list the shape attributes of the stripes detected
in the first and second image patches, respectively. The listed attributes are
the perimeter (P), the area (A), the ratio A/P², P/2, the maximum extent (Max
Ext.), and the centroid coordinates (XC and YC) of each stripe.

The figures in Table (1) show that a total of 28 short stripes and 8 long
stripes are extracted from the first image patch. The perimeters of the short
stripes range from 42 pixels to 46 pixels, whereas the perimeters of the long
stripes range from 77 pixels to 88 pixels. The areas vary from 65 pixels to
72.5 pixels for the short stripes, and from 130.5 pixels to 153 pixels for the
long stripes.

The figures in Table (2), on the other hand, reveal that a total of 144 short
stripes and 6 long stripes are detected in the second image patch. The
perimeters of the short stripes vary from 31 pixels to 47 pixels, and the
perimeters of the long stripes vary from 72 pixels to 88 pixels. The
corresponding stripe areas range from 47.5 pixels to 72.5 pixels for the short
stripes, and from 116.5 pixels to 149 pixels for the long stripes.

The ratio A/P² of a stripe is a scale-invariant measure. However, other
irrelevant closed figures in the image patch have A/P² values that are close to
the corresponding values of the stripes. Thus, the A/P² measure is found to be
ineffective on its own in extracting the stripes of interest.

In contrast, the maximum extent of any of the extracted stripes is found to be
almost equal to one half of its perimeter. This is evident from the records of
P/2 and Max Ext. in the two tables: the difference between these two parameters
never exceeds one pixel for any of the stripes. This correlation has been found
to hold regardless of figure size, position and orientation. In fact, it has
been used as an additional constraint in the developed software to discard
irrelevant objects that have closed shapes but are not elongated.

It has also been found that some stripes existing in the two image patches have
not been recognized. These stripes either do not form closed figures or are of
insignificant size. The first category includes stripes having gaps along their
boundaries or consisting of contiguous partial stripes. The second category
consists of fractional stripes of limited size compared with the common size of
the short stripes. Relaxing the size criteria to capture such stripes might
lead to extracting other irrelevant objects with closed figures.

Finally, since the orientation of the longer edges of the stripes is almost
invariable, it can be used as an additional constraint to discriminate stripes
from other unrelated objects. Discrete boundary coding can be used here to
describe the stripe boundary in terms of chain codes, each of which corresponds
to a segment direction.
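A MATLAB sketch of such a chain coding is given below; the 8-direction code
table and the use of the dominant code as an orientation attribute are
illustrative choices.

    % Freeman chain code of a closed boundary b (N-by-2 list of 8-connected points)
    d = diff(b);                                   % steps between consecutive points
    dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];  % E NE N NW W SW S SE
    code = zeros(size(d,1), 1);
    for k = 1:size(d,1)
        [~, idx] = ismember(d(k,:), dirs, 'rows'); % match the step to a direction
        code(k) = idx - 1;                         % chain code (0..7) of segment k
    end
    dominant = mode(code);                         % dominant edge direction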

7. CONCLUSIONS

An approach for the automatic extraction of the stripes dividing a highway into
lanes from large-scale digital images has been presented. The approach starts
by applying Canny’s edge detector to find edges in the image under
consideration. The detected edge points are aggregated into edge segments
utilizing the attributes of the edge points. Since the stripes of interest
appear in the image as closed figures, only extracted edge segments forming
closed figures are considered. The relevant shape measures are computed for,
and applied to, all identified closed figures to extract the desired stripes.
Based on the results of the experiments carried out on two image patches
showing part of a road network, the following conclusions can be drawn:

• Most of the stripes existing in the two image patches are successfully
recognized utilizing the perimeter and area measures as applied to the edge
segments of closed figures.
• Prior information about the dimensions of the existing stripes is necessary
before starting the automatic recognition. Such information can be obtained
from sample measurements on the ground or on the image.
• The relationship between the maximum extent of a stripe and its perimeter
can be employed to discard irrelevant objects that lack the elongation the
stripes have.
• The orientation of the stripe edges can be a valuable attribute for
discriminating stripes from other irrelevant objects. Discrete boundary coding
is a popular method through which the orientation is parameterized.
• As the resolution of the digital image gets higher, the stripes become
sharper and also more distinctive as their width in pixels gets larger. This
would allow the proposed approach to perform better.

REFERENCES

[1] Baumgartner, A., Steger, C., Mayer, H., Eckstein, W. and Ebner, H. (1999)
“Automatic Road Extraction Based on Multi-Scale, Grouping, and Context.”
Photogrammetric Engineering & Remote Sensing, Vol. 65, No. 7, pp. 777-785.
[2] Eker, O. and Seker, D. (2004) “Semi-automatic Road Extraction from Digital
Aerial Photographs.” International Archives of Photogrammetry and Remote
Sensing, Vol. 35, Part B3.
[3] Haralick, R. M. and Shapiro, L. G. (1992) “Computer and Robot Vision.”
Addison-Wesley, Reading, MA.
[4] Hinz, S. and Baumgartner, A. (2000) “Road Extraction in Urban Areas
Supported by Context Objects.” International Archives of Photogrammetry and
Remote Sensing, Vol. 33, Part B3, pp. 405-412.
[5] McKeown, D., McGlone, C., Cochran, S., Hsieh, Y., Roux, M. and Shufelt, J.
(1996) “Automatic Cartographic Feature Extraction Using Photogrammetric
Principles.” Digital Photogrammetry: An Addendum to the Manual of
Photogrammetry, ASPRS, Maryland.
[6] Nevatia, R. (1982) “Machine Perception.” Prentice-Hall, Englewood Cliffs,
New Jersey.
[7] Poz, A., Vale, G. and Zanin, I. (2004) “Automatic Road Segment Extraction
by Grouping Road Objects.” International Archives of Photogrammetry and Remote
Sensing, Vol. 35, Part B3.
[8] Schenk, T. (1999) “Digital Photogrammetry.” TerraScience, Laurelville, OH.
[9] Steger, C. (2000) “Subpixel-Precise Extraction of Lines and Edges.”
International Archives of Photogrammetry and Remote Sensing, Vol. 33, Part B3,
pp. 141-156.
[10] Trinder, J. and Wang, Y. (1998) “Knowledge-Based Road Interpretation in
Aerial Images.” International Archives of Photogrammetry and Remote Sensing,
Vol. 32, Part 4, pp. 635-640.

Figure 1: The First Image Patch

Figure 2: Edge Segments Detected in the First Image Patch

Figure 3: Border Stripes Detected in the First Image Patch

Figure 4: Enlarged Border Stripes Contained in the Rectangle of Figure (3)

Figure 5: The Second Image Patch

Figure 6: Edge Segments Detected in the Second Image Patch

Figure 7: Border Stripes Detected in the Second Image Patch

Figure 8: Enlarged Border Stripes Contained in the Rectangle of Figure (7)

Table 1: Shape Attributes of Stripes Detected in the First Image Patch (P, A,
P/2 and Max Ext. in pixel units; XC and YC are the centroid pixel coordinates).

Stripe P A A/P² P/2 Max Ext. XC YC
1 45 68.5 0.034 22.5 23 60.33 1394.24
2 45 69.5 0.034 22.5 23 74.18 1328.76
3 44 65.0 0.034 22.0 23 88.07 1263.00
4 44 65.0 0.034 22.0 23 102.07 1197.00
5 45 67.5 0.033 22.5 23 115.73 1131.76
6 45 69.5 0.034 22.5 23 129.82 1065.76
7 46 69.0 0.033 23.0 23 143.83 1000.00
8 46 69.0 0.033 23.0 23 171.74 869.00
9 45 66.5 0.033 22.5 23 185.89 802.76
10 46 70.0 0.033 23.0 23 199.54 738.00
11 46 71.0 0.034 23.0 23 213.52 672.00
12 45 67.5 0.033 22.5 23 227.53 606.76
13 45 69.5 0.034 22.5 23 241.22 541.76
14 46 69.0 0.033 23.0 23 255.26 476.00
15 45 66.5 0.033 22.5 23 268.89 411.76
16 43 65.5 0.035 21.5 22 282.81 346.26
17 45 68.5 0.034 22.5 23 296.07 280.76
18 45 69.5 0.034 22.5 23 308.91 215.76
19 45 68.5 0.034 22.5 22 321.40 151.27
20 44 67.0 0.035 22.0 23 333.77 86.00
21 44 67.0 0.035 22.0 23 345.32 22.00
22 44 71.0 0.037 22.0 22 369.82 839.50
23 43 67.5 0.037 21.5 22 396.47 747.26
24 44 70.0 0.036 22.0 23 405.57 713.00
25 44 72.0 0.037 22.0 23 423.39 650.00
26 45 72.5 0.036 22.5 23 441.38 586.76
27 45 72.5 0.036 22.5 23 476.96 460.76
28 42 66.0 0.037 21.0 21 494.83 398.00
29 84 140.0 0.020 42.0 42 527.73 282.50
30 86 146.0 0.020 43.0 43 563.45 157.00
31 88 153.0 0.020 44.0 44 581.34 94.50
32 77 130.5 0.022 38.5 38 599.96 29.26
33 85 140.5 0.019 42.5 43 247.22 1274.76
34 82 138.0 0.021 41.0 42 264.83 1211.50
35 81 135.5 0.021 40.5 41 300.19 1084.75
36 85 141.5 0.020 42.5 43 336.69 955.75

Table 2: Shape Attributes of Stripes Detected in the Second Image Patch (P, A,
P/2 and Max Ext. in pixel units; XC and YC are the centroid pixel coordinates).

Stripe P A A/P² P/2 Max Ext. XC YC
1 39 59.5 0.039 19.5 20 22.74 1196.26
2 40 68.0 0.043 20.0 20 25.03 1993.38
3 38 66.0 0.046 19.0 19 32.00 1281.87
4 41 69.5 0.041 20.5 21 69.24 1727.85
5 39 67.5 0.044 19.5 19 79.77 1958.10
6 40 71.0 0.044 20.0 20 134.03 1920.72
7 38 70.0 0.048 19.0 19 187.00 1882.24
8 40 66.0 0.041 20.0 20 202.10 1606.53
9 40 69.0 0.043 20.0 20 233.95 1551.50
10 37 67.5 0.049 18.5 19 238.78 1841.97
11 36 67.0 0.052 18.0 18 239.03 1129.64
12 41 56.5 0.034 20.5 21 263.76 1724.46
13 40 67.0 0.042 20.0 20 265.85 1496.50
14 33 50.5 0.046 16.5 17 279.76 1270.33
15 35 64.5 0.053 17.5 17 287.77 1087.57
16 38 66.0 0.046 19.0 19 298.03 1441.00
17 40 68.0 0.043 20.0 20 313.13 2370.50
18 45 74.5 0.037 22.5 23 327.76 1724.80
19 38 66.0 0.046 19.0 20 329.45 1386.50
20 36 72.0 0.056 18.0 18 335.03 1044.97
21 40 67.0 0.042 20.0 20 345.05 2314.50
22 40 69.0 0.043 20.0 20 377.68 2257.03
23 34 69.0 0.060 17.0 17 382.00 1000.59
24 43 72.5 0.039 21.5 22 393.26 1725.93
25 40 70.0 0.044 20.0 20 409.33 2201.50
26 40 68.0 0.043 20.0 20 425.08 1221.50
27 35 70.5 0.058 17.5 17 427.80 955.80
28 41 69.5 0.041 20.5 20 441.39 2145.27
29 44 67.0 0.035 22.0 23 458.00 1727.64
30 40 70.0 0.044 20.0 20 456.83 1166.50
31 35 66.5 0.054 17.5 18 472.09 910.26
32 39 67.5 0.044 19.5 20 473.15 2089.26
33 35 71.5 0.058 17.5 17 482.80 1622.77
34 40 68.0 0.043 20.0 20 488.80 1111.03
35 40 68.0 0.043 20.0 20 505.73 2032.50
36 36 68.0 0.052 18.0 18 514.89 862.53
37 45 72.5 0.036 22.5 23 522.76 1729.89
38 39 68.5 0.045 19.5 20 520.28 1056.26
39 34 63.0 0.054 17.0 17 527.94 1574.53
40 44 66.0 0.034 22.0 22 532.50 2523.57

Table 2, continued

Stripe P A A/P² P/2 Max Ext. XC YC
41 39 65.5 0.043 19.5 20 537.77 1976.26
42 39 66.5 0.044 19.5 19 551.80 1001.77
43 37 71.5 0.052 18.5 18 556.49 813.30
44 33 52.5 0.048 16.5 17 570.76 1354.39
45 41 70.5 0.042 20.5 21 569.90 1919.76
46 36 69.0 0.053 18.0 18 572.06 1525.03
47 46 71.0 0.034 23.0 23 588.00 1733.00
48 39 66.5 0.044 19.5 19 583.74 946.77
49 43 62.5 0.034 21.5 22 596.26 2517.95
50 37 68.5 0.050 18.5 18 596.27 763.27
51 40 68.0 0.043 20.0 21 602.00 1864.00
52 38 65.0 0.045 19.0 19 615.11 892.00
53 34 57.0 0.049 17.0 17 636.71 708.00
54 40 70.0 0.044 20.0 20 646.65 837.03
55 45 67.5 0.033 22.5 23 653.24 1736.76
56 43 61.5 0.033 21.5 22 662.26 2511.98
57 38 66.0 0.046 19.0 19 670.45 660.00
58 38 63.0 0.044 19.0 19 677.95 782.53
59 36 56.0 0.043 18.0 18 689.50 1388.36
60 37 62.5 0.046 18.5 19 706.70 607.78
61 40 68.0 0.043 20.0 20 709.55 727.03
62 44 66.0 0.034 22.0 23 719.00 1740.80
63 43 65.5 0.035 21.5 22 727.74 2506.26
64 37 68.5 0.050 18.5 19 739.76 1322.76
65 38 64.0 0.044 19.0 19 740.92 672.00
66 41 70.5 0.042 20.5 20 764.27 1583.27
67 38 64.0 0.044 19.0 19 772.45 617.00
68 39 67.5 0.044 19.5 20 777.23 503.26
69 45 69.5 0.034 22.5 23 783.76 1745.29
70 38 66.0 0.046 19.0 19 780.58 1271.53
71 39 66.5 0.044 19.5 19 803.80 561.77
72 31 47.5 0.049 15.5 16 812.74 1423.74
73 38 65.0 0.045 19.0 19 812.53 450.00
74 39 69.5 0.046 19.5 19 820.64 1220.26
75 38 65.0 0.045 19.0 19 834.95 507.00
76 47 69.5 0.031 23.5 24 849.75 1750.02
77 39 65.5 0.043 19.5 20 847.13 396.26
78 44 66.0 0.034 22.0 22 858.50 2494.48
79 40 69.5 0.043 20.0 21 866.45 452.00
80 39 65.5 0.044 19.5 19 881.18 342.77

Table 2, continued

Stripe P A A/P² P/2 Max Ext. XC YC
81 40 69.0 0.043 20.0 20 893.75 1360.50
82 39 65.5 0.043 19.5 19 897.92 396.77
83 45 68.5 0.034 22.5 23 924.24 2488.36
84 41 69.5 0.041 20.5 20 926.54 1304.27
85 38 65.0 0.045 19.0 20 929.37 341.50
86 37 67.5 0.049 18.5 19 938.87 1066.76
87 36 59.0 0.046 18.0 18 950.11 235.50
88 39 66.5 0.044 19.5 20 958.74 1249.26
89 40 67.0 0.042 20.0 20 960.85 286.50
90 38 68.0 0.047 19.0 19 977.63 1014.53
91 38 66.0 0.046 19.0 19 984.45 182.00
92 44 66.0 0.034 22.0 22 989.50 2482.30
93 46 72.0 0.034 23.0 23 995.00 1764.89
94 38 64.0 0.044 19.0 19 992.18 232.00
95 38 68.0 0.047 19.0 19 1016.47 962.53
96 37 63.5 0.046 18.5 18 1019.27 128.27
97 39 64.5 0.042 19.5 20 1023.26 1138.74
98 39 67.5 0.044 19.5 19 1023.92 176.77
99 44 66.0 0.034 22.0 23 1055.00 2476.18
100 34 58.0 0.050 17.0 17 1050.85 917.00
101 43 63.5 0.034 21.5 22 1059.74 1774.26
102 39 66.5 0.044 19.5 20 1055.33 122.26
103 39 65.5 0.043 19.5 20 1055.82 1083.26
104 41 61.5 0.037 20.5 21 1073.27 567.76
105 38 67.0 0.046 19.0 20 1086.42 68.50
106 40 65.0 0.041 20.0 20 1088.68 1027.03
107 41 61.5 0.037 20.5 21 1088.56 446.24
108 40 59.0 0.037 20.0 20 1096.10 385.03
109 42 64.0 0.036 21.0 21 1103.69 324.00
110 45 64.5 0.032 22.5 23 1120.76 2470.00
111 44 66.0 0.034 22.0 23 1125.00 1784.66
112 39 66.5 0.044 19.5 20 1121.46 970.74
113 39 66.5 0.044 19.5 20 1153.79 915.26
114 35 62.5 0.051 17.5 18 1161.71 768.26
115 45 64.5 0.032 22.5 23 1186.76 2463.87
116 38 67.0 0.046 19.0 18 1198.97 716.50
117 36 63.0 0.049 18.0 18 1235.39 664.50
118 45 65.5 0.032 22.5 23 1252.76 2457.84
119 38 62.0 0.043 19.0 19 1249.92 751.00
120 44 67.0 0.035 22.0 23 1255.00 1807.64

Table 2, continued

Stripe P A A/P² P/2 Max Ext. XC YC
121 36 61.0 0.047 18.0 18 1269.50 172.28
122 37 64.5 0.047 18.5 18 1272.03 612.27
123 40 68.0 0.043 20.0 20 1281.88 696.50
124 37 64.5 0.047 18.5 19 1308.76 559.78
125 39 63.5 0.042 19.5 19 1314.51 640.77
126 44 66.0 0.034 22.0 23 1319.00 1819.20
127 45 65.5 0.032 22.5 23 1318.76 2451.84
128 34 53.0 0.046 17.0 17 1322.53 139.24
129 38 66.0 0.046 19.0 20 1344.74 506.50
130 39 64.5 0.042 19.5 20 1347.18 585.26
131 36 57.0 0.044 18.0 18 1375.50 105.89
132 38 65.0 0.045 19.0 19 1379.37 454.00
133 40 68.0 0.043 20.0 20 1380.47 528.50
134 44 67.0 0.035 22.0 22 1384.50 1831.59
135 43 62.5 0.034 21.5 22 1384.26 2446.05
136 38 65.0 0.045 19.0 20 1413.92 400.50
137 39 66.5 0.044 19.5 20 1413.74 472.26
138 35 58.5 0.048 17.5 18 1428.74 72.51
139 45 69.5 0.034 22.5 23 1449.24 1844.09
140 39 65.5 0.043 19.5 19 1445.72 417.77
141 45 67.5 0.033 22.5 23 1450.76 2440.62
142 38 66.0 0.046 19.0 19 1447.87 347.00
143 35 57.5 0.047 17.5 18 1481.29 39.37
144 43 72.5 0.039 21.5 21 1481.00 293.74
145 72 118.0 0.023 36.0 36 637.50 1373.21
146 72 119.0 0.023 36.0 36 1135.50 1516.47
147 73 116.5 0.022 36.5 37 204.25 1248.70
148 77 130.5 0.022 38.5 38 1262.74 1552.99
149 86 143.0 0.019 43.0 43 1390.00 1589.42
150 88 149.0 0.019 44.0 44 1327.50 1571.52

