Article

Sensor for High Speed, High Precision Measurement of 2-D Positions

Carlos A. Luna, José L. Lázaro, Manuel Mazo and Angel Cano

Electronics Department, High Polytechnic School, Alcalá University, Alcalá de Henares (28871), Madrid, Spain
* Author to whom correspondence should be addressed.
Sensors 2009, 9(11), 8810-8823; https://doi.org/10.3390/s91108810
Submission received: 28 August 2009 / Revised: 21 September 2009 / Accepted: 23 October 2009 / Published: 3 November 2009
(This article belongs to the Section Chemical Sensors)

Abstract

A sensor system to measure the 2-D position of an object that intercepts a plane in space is presented in this paper. The sensor system was developed with the aim of measuring the height and lateral position of the contact wires supplying power to electric locomotives. The sensor comprises two line-scans focused on the zone to be measured and positioned in such a way that their viewing planes lie in the same plane. The paper includes a mathematical model of the sensor system and details the method used to calibrate it. The procedure used for high speed measurement of object position in space is also described; measurement acquisition time was less than 0.7 ms. Finally, position measurement results verifying system performance in real time are given.


1. Introduction

The use of visual information to detect the position of objects in relation to other objects is a fundamental function of computer vision systems. Many methods and applications have been developed to perform this task, each with its respective advantages and disadvantages in terms of computational efficiency, complexity, robustness, accuracy and performance. In the majority of cases, more than one camera [1-4], or a camera and a structured light source [5-7], have been used to establish the position of an object. In some applications, the use of line-scans has contributed to an overall improvement of the system, as measurement acquisition is faster than with matrix cameras, less information needs to be processed, and sensors with greater spatial resolution (a higher number of pixels) can be used [8,9]. However, a disadvantage of linear sensors is that neither traditional calibration methods nor the object detection algorithms developed for matrix-camera-based systems can be applied [10-12]. The calibration patterns used for matrix cameras in 3-D measurements cannot be used to calibrate line-scan sensors, because it is virtually impossible to match the captured line-image with the interest points of the patterns (circle centers, line intersections).
In the calibration of line-scan sensors, 3-D measurements cannot be obtained without assuming one coordinate a priori, so it is more accurate in this case to optimize the parameters for a 2-D calibration. A line-scan calibration method using calibration pattern line-images in different positions is presented in [13]. Although this requires extreme precision when positioning the pattern, which could represent a disadvantage, [14] describes how this potential problem can be resolved through the use of calibration patterns at different depths. Nevertheless, when more than one line-scan is used, these methods can only obtain the individual intrinsic parameters of each line-scan. It is not possible to obtain accurate extrinsic parameters, as one limitation of these methods is that the line-scan sensor array must be approximately parallel to the calibration pattern planes. If two line-scans are used to measure position with triangulation techniques, the sensors must be separated, and thus the pattern cannot be situated in such a way that its planes are parallel to both sensor arrays.
In this study, we used a 2-D sensor based on two line-scans. Section 2 describes the sensor employed and the sensor modeling. Section 3 presents the calibration method used. Section 4 gives the experimental results. Finally, Section 5 summarizes the main conclusions.

2. Sensor System

The overall function of the sensor system is based on the capture of contact wire images with two line-scans. Following line-image processing and triangulation, it is possible to calculate 2-D coordinates for the objects in relation to a specific reference system.
The system comprises a computer with an image acquisition and processing board (IAB) for each line-scan, as shown in Figure 1. This board is responsible for information transfer (images and control) between the cameras and the PC. The PC performs line-image processing to determine the 2-D coordinates of objects. Image acquisition, control, processing and data presentation are carried out by software written in C.

2.1. Sensor Modeling

The reference coordinate system does not usually coincide with camera or line-scan coordinates (Figure 2). To resolve this issue, and thus obtain the coordinates of the line-image of a 2-D point in space with respect to a reference system, a 2-D projective transformation is performed. This enables us to obtain the camera system coordinates corresponding to a scene point.
Using Figure 2 and a 2-D projective transformation, it is possible to change from one coordinate system to the other (Equation 1):
$$P_C = R\,P_W + T \quad (1)$$
where R is the rotation matrix defined by:
$$R = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} = \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} \quad (2)$$
and T is the translation vector which defines the relative position between the optical centre of the line-scan camera and the world coordinates centre (Equation 3):
$$T = \begin{bmatrix} t_x \\ t_y \end{bmatrix} \quad (3)$$
If Equation (1) is expressed in homogeneous coordinates, we obtain Equation (4):
$$\begin{bmatrix} x_C \\ y_C \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix} \quad (4)$$
The values represented in the projection model are calculated through camera calibration. These extrinsic parameters (tx, ty, α) define the relative position between the world coordinate system and the camera coordinate system.
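As an illustration of Equations (1)–(4), the following Python sketch (not part of the original system software, which was written in C) applies the homogeneous world-to-camera transformation; the numeric values in the example call are hypothetical, not the calibrated ones:

```python
import numpy as np

def world_to_camera(x_w, y_w, alpha, t_x, t_y):
    """Equation (4): homogeneous 2-D transform from world to camera coordinates."""
    c, s = np.cos(alpha), np.sin(alpha)
    H = np.array([[c,    s,   t_x],
                  [-s,   c,   t_y],
                  [0.0, 0.0,  1.0]])
    x_c, y_c, _ = H @ np.array([x_w, y_w, 1.0])
    return x_c, y_c

# Hypothetical values: a point 1 m above the world origin,
# camera rotated -20 degrees and translated (0.5, 1.0) m.
print(world_to_camera(0.0, 1.0, np.deg2rad(-20.0), 0.5, 1.0))
```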
Using the pin-hole camera model for a 1-D sensor, as is the case of line-scans, the projection of a point Pc(xc, yc) from the scene onto a line-image will bear the coordinate x:
$$x = f\,\frac{x_C}{y_C} \quad (5)$$
If (5) is written in matrix form with homogeneous coordinates, we obtain:
$$\begin{bmatrix} m\,x \\ m \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_C \\ y_C \\ 1 \end{bmatrix} \quad (6)$$
Substituting (4) in (6), a general expression is obtained for relating a point in the world coordinate system with its corresponding projection onto the line-image:
$$\begin{bmatrix} m\,x \\ m \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix} \quad (7)$$
A diagram explaining the pin-hole model for a line-scan is shown in Figure 3. If x is represented by the pixel coordinate xim, taking into account that the optical axis may cross the sensor at a pixel cx other than its centre, Equation (8) can be formulated:
$$x_{im} = \frac{x}{s_x} + c_x \quad (8)$$
The scale factor sx (mm/pixel) is the parameter which relates the line-image system of metric coordinates to the pixel array coordinate system provided by the line-scan. This value corresponds to pixel size. In this case, the theoretical value given by the manufacturer is 12 μm.
Substituting (5) in (8) and defining fx = f/sx, Equation (9) is obtained, which models the line-scans according to the parameters of the pin-hole model:
$$x_{im} = f_x\,\frac{x_C}{y_C} + c_x \quad (9)$$
Using (9), Equation (7) can be rewritten in the following form:
$$\begin{bmatrix} m\,x_{im} \\ m \end{bmatrix} = \underbrace{\begin{bmatrix} f_x & c_x \\ 0 & 1 \end{bmatrix}}_{M_{int}} \underbrace{\begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \end{bmatrix}}_{M_{ext}} \begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix} = M_{int} M_{ext} \begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix} \quad (10)$$
When the intrinsic parameter matrix Mint and the extrinsic parameter matrix Mext are multiplied, a general expression for the projection matrix M = Mint · Mext is obtained, which represents the relation between the scene points and their projection onto a line-image.
If the coefficients of M are denoted m11–m23, (10) can be rewritten as (11):
$$\begin{bmatrix} m\,x_{im} \\ m \end{bmatrix} = \underbrace{\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \end{bmatrix}}_{M} \begin{bmatrix} x_W \\ y_W \\ 1 \end{bmatrix} \quad (11)$$
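Equations (5)–(11) can be collapsed into a single projection function. The sketch below is a minimal Python illustration under the sign conventions assumed above; the example call borrows the left-hand line-scan values from Table 1 (lengths converted to metres), so the printed pixel coordinate is only indicative:

```python
import numpy as np

def projection_matrix(f_x, c_x, alpha, t_x, t_y):
    """M = Mint * Mext, Equations (10)-(11)."""
    M_int = np.array([[f_x, c_x],
                      [0.0, 1.0]])
    c, s = np.cos(alpha), np.sin(alpha)
    M_ext = np.array([[c,  s, t_x],
                      [-s, c, t_y]])
    return M_int @ M_ext                 # 2 x 3 projection matrix

def project(M, x_w, y_w):
    """Pixel coordinate of a world point, Equation (11): x_im = (m*x_im)/m."""
    mx, m = M @ np.array([x_w, y_w, 1.0])
    return mx / m

# Left-hand line-scan parameters from Table 1 (cm converted to m).
M_L = projection_matrix(f_x=2577.0, c_x=1033.9,
                        alpha=np.deg2rad(-21.8), t_x=0.5334, t_y=1.0635)
print(project(M_L, 0.0, 1.07))           # thread 3 of Table 2; approx. 1204 px under these assumed conventions
```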

3. Calculation of Calibration Parameters

An alternative method for obtaining the projection matrix M coefficients is to assign a value to one of them (in this case, m23 = 1 is chosen) and to express the other coefficients in terms of this value (12). Thus, the Direct Linear Transformation (DLT) coefficient vector $L^T = [L_1\ L_2\ L_3\ L_4\ L_5]$ is obtained:
$$\begin{bmatrix} m_{11}/m_{23} & m_{12}/m_{23} & m_{13}/m_{23} \\ m_{21}/m_{23} & m_{22}/m_{23} & 1 \end{bmatrix} = \begin{bmatrix} L_1 & L_2 & L_3 \\ L_4 & L_5 & 1 \end{bmatrix} \quad (12)$$
If the DLT coefficient vector is substituted and the matrices in (11) are multiplied, the expression giving the projection of a scene point onto the line-image is found:
$$x_{im} = \begin{bmatrix} x_W & y_W & 1 & -x_W\,x_{im} & -y_W\,x_{im} \end{bmatrix} L \quad (13)$$
As can be seen, Equation (13) has five DLT coefficients (L1–L5), so at least five 2-D point correspondences, visible to both line-scans, are necessary. Therefore, the pattern must have at least five known points whose correspondence with the captured line-images can be established.
The number of points of correspondence between the real world and line-images is represented by h. The more points used, the greater calibration accuracy becomes. A matrix of h rows is formed, where each row corresponds to a point in the pattern:
$$\underbrace{\begin{bmatrix} x_{im\,1} \\ \vdots \\ x_{im\,h} \end{bmatrix}}_{B} = \underbrace{\begin{bmatrix} x_{W1} & y_{W1} & 1 & -x_{W1}\,x_{im\,1} & -y_{W1}\,x_{im\,1} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ x_{Wh} & y_{Wh} & 1 & -x_{Wh}\,x_{im\,h} & -y_{Wh}\,x_{im\,h} \end{bmatrix}}_{A} L \quad (14)$$
To find L, the least squares estimate is used:
$$L = (A^T A)^{-1} A^T B \quad (15)$$
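Assuming arrays x_w, y_w and x_im holding the h correspondences, Equations (13)–(15) translate into a few lines of Python (a sketch using an ordinary least-squares solve, not the original C implementation):

```python
import numpy as np

def dlt_coefficients(x_w, y_w, x_im):
    """Build A and B from Equation (14) and solve Equation (15) for L."""
    x_w, y_w, x_im = map(np.asarray, (x_w, y_w, x_im))
    A = np.column_stack([x_w, y_w, np.ones_like(x_w),
                         -x_w * x_im, -y_w * x_im])   # h x 5 design matrix
    B = x_im                                          # h observed pixel coordinates
    L, *_ = np.linalg.lstsq(A, B, rcond=None)         # L = (A^T A)^-1 A^T B
    return L                                          # [L1, L2, L3, L4, L5]
```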
The use of m23 = 1 is justified because the solution is defined up to a scale factor, given that the projection matrix is homogeneous. The parameter m23 is the ty component of the translation vector which locates the line-scan in the 2-D world reference system. Thus, if ty were zero, the proposed solution would not be valid. The parameter m23 is obtained from the L4 and L5 components of the DLT coefficient vector:
$$m_{23} = \frac{1}{\sqrt{L_4^2 + L_5^2}} = t_y \quad (16)$$
To obtain the projection matrix M from the DLT coefficients, an inverse scale change is carried out. This is achieved by multiplying each of the calculated coefficients by m23:
$$M = m_{23} \begin{bmatrix} L_1 & L_2 & L_3 \\ L_4 & L_5 & 1 \end{bmatrix} \quad (17)$$
Intrinsic parameters
Once the projection matrix has been calculated, calculation of the intrinsic parameters is a simple operation:
$$\begin{bmatrix} f_x \\ c_x \end{bmatrix} = \begin{bmatrix} m_{23} L_5 & m_{23} L_4 \\ -m_{23} L_4 & m_{23} L_5 \end{bmatrix}^{-1} \begin{bmatrix} m_{23} L_1 \\ m_{23} L_2 \end{bmatrix} \quad (18)$$
Extrinsic parameters
The extrinsic parameters are obtained as follows:
$$t_y = m_{23} = \frac{1}{\sqrt{L_4^2 + L_5^2}} \quad (19)$$
$$\alpha = -\sin^{-1}(m_{23} L_4) \quad (20)$$
$$t_x = \frac{m_{23} L_3 - c_x t_y}{f_x} \quad (21)$$
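Under the same conventions, the parameter-recovery step, Equations (16)–(21), can be sketched as follows (again a Python illustration, not the original implementation):

```python
import numpy as np

def parameters_from_dlt(L):
    """Recover M and the intrinsic/extrinsic parameters, Equations (16)-(21)."""
    L1, L2, L3, L4, L5 = L
    m23 = 1.0 / np.hypot(L4, L5)              # Equation (16): m23 = ty
    M = m23 * np.array([[L1, L2, L3],
                        [L4, L5, 1.0]])       # Equation (17)
    # Intrinsic parameters, Equation (18)
    S = np.array([[ m23 * L5, m23 * L4],
                  [-m23 * L4, m23 * L5]])
    f_x, c_x = np.linalg.solve(S, m23 * np.array([L1, L2]))
    # Extrinsic parameters, Equations (19)-(21)
    t_y = m23
    alpha = -np.arcsin(m23 * L4)
    t_x = (m23 * L3 - c_x * t_y) / f_x
    return M, f_x, c_x, alpha, t_x, t_y
```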

3.1. Calibration Pattern

A fundamental step in the calibration process is the selection of an adequate calibration pattern. As with the calibration of matrix cameras, 3-D patterns offer the best results for line-scan calibration. In this case, calibration was carried out using a 3-D calibration pattern comprising a series of parallel threads in different positions (Figure 4). The pattern is located in such a way that the threads cross the vision plane of the line-scans perpendicularly, as shown in Figure 5.
When many reference threads are used in the proposed pattern, the projections of different threads may overlap. This can be detected by a lack of concordance between the number of reference points in the pattern and the number of points seen in the line-images.

3.2. Calibration Results

In our case, calibration was carried out using a total of 16 threads in the pattern. Table 1 gives the calibration parameters obtained and the calibration error ε. This error quantifies the difference between the coordinates of each point on the real line-images, xim_real, and those calculated by projecting, with the matrix M, each of the h calibration pattern points, xim_proy:
$$\varepsilon = \frac{1}{h} \sum_{i=1}^{h} \left| x_{im\_proy\,i} - x_{im\_real\,i} \right| \quad (22)$$
With the value sx = 12 μm and the scale factors obtained, the focal length of each line-scan can be calculated. The focal length of the left-hand line-scan is fL = 30.7487 mm, and that of the right-hand line-scan is fR = 30.4995 mm.
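A small sketch of the error metric of Equation (22) and of the focal-length conversion f = fx · sx (assuming the absolute-error form above and sx = 0.012 mm/pixel):

```python
import numpy as np

def calibration_error(x_im_proy, x_im_real):
    """Mean absolute reprojection error in pixels, Equation (22)."""
    return float(np.mean(np.abs(np.asarray(x_im_proy) - np.asarray(x_im_real))))

def focal_length_mm(f_x, s_x=0.012):
    """Focal length in mm from the pixel-unit scale factor: f = f_x * s_x."""
    return f_x * s_x
```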

3.3. Calculation of 2-D Position with Two Calibrated Line-scans

Once both line-scans have been calibrated, it is possible to obtain a correspondence between a 2-D point and its projection on both line-images. The model for a single calibrated line-scan, based on a pin-hole model, can be expressed by (11). If the same calibration pattern, located in a particular position, is used to calibrate both line-scans, Equation (23) can be obtained. This establishes the relation between the two previous models and yields the parameters [xw, yw] as a function of the line-image coordinates captured by each line-scan and the corresponding projection matrices:
$$\left.\begin{array}{l} \begin{bmatrix} m_{11}^{R} & m_{12}^{R} & -x_{im}^{R} \\ m_{21}^{R} & m_{22}^{R} & -1 \end{bmatrix} \begin{bmatrix} x_W \\ y_W \\ m^{R} \end{bmatrix} = -\begin{bmatrix} m_{13}^{R} \\ m_{23}^{R} \end{bmatrix} \\[3ex] \begin{bmatrix} m_{11}^{L} & m_{12}^{L} & -x_{im}^{L} \\ m_{21}^{L} & m_{22}^{L} & -1 \end{bmatrix} \begin{bmatrix} x_W \\ y_W \\ m^{L} \end{bmatrix} = -\begin{bmatrix} m_{13}^{L} \\ m_{23}^{L} \end{bmatrix} \end{array}\right\} \quad (23)$$
The system of linear equations (23) represents two straight lines which intersect at the point [xw, yw]. In this system of equations, the other unknown parameters are mL and mR. To obtain the geometric location of the point [xw, yw], the system (23) is inverted, giving (24):
$$\begin{bmatrix} x_W \\ y_W \\ m^{L} \\ m^{R} \end{bmatrix} = \begin{bmatrix} m_{11}^{L} & m_{12}^{L} & -x_{im}^{L} & 0 \\ m_{21}^{L} & m_{22}^{L} & -1 & 0 \\ m_{11}^{R} & m_{12}^{R} & 0 & -x_{im}^{R} \\ m_{21}^{R} & m_{22}^{R} & 0 & -1 \end{bmatrix}^{-1} \begin{bmatrix} -m_{13}^{L} \\ -m_{23}^{L} \\ -m_{13}^{R} \\ -m_{23}^{R} \end{bmatrix} \quad (24)$$
With the projection matrices for each line-scan and Equation (24), it is possible to calculate the 2-D position of a point in the measurement zone whose projections in the two line-images are xim^L and xim^R. Using the n (n = 2048) possible values of xim^L and xim^R, we calculated two n × n (2048 × 2048) matrices, one containing the position xw (lateral decentring) and the other the position yw (height). We called these matrices the “Sensor Matrices”.
To summarise, the system for measuring 2-D position is defined by two matrices, called “sensor matrices”, which contain the coordinates (x, y) of each scene point that projects onto the line-scans at the coordinates xim^L and xim^R. By reading these matrices, the geometric location of a scene point can be identified. For this, it is only necessary to know the pixel position of the projection of the object to be measured on each line-scan. These pixel values are then used to index the sensor matrices, stored during the calibration process. In this way, the calculation time for measurements carried out in real time is reduced.
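The sketch below illustrates Equation (24) and the precomputation of the sensor matrices: the 4 × 4 system is solved once per pixel pair offline, after which each run-time measurement reduces to two array reads (function names are hypothetical):

```python
import numpy as np

def triangulate(M_L, M_R, x_im_L, x_im_R):
    """Solve Equation (24) for [x_w, y_w, m_L, m_R] from one pixel pair."""
    A = np.array([[M_L[0, 0], M_L[0, 1], -x_im_L, 0.0],
                  [M_L[1, 0], M_L[1, 1], -1.0,    0.0],
                  [M_R[0, 0], M_R[0, 1], 0.0,     -x_im_R],
                  [M_R[1, 0], M_R[1, 1], 0.0,     -1.0]])
    b = -np.array([M_L[0, 2], M_L[1, 2], M_R[0, 2], M_R[1, 2]])
    x_w, y_w, _, _ = np.linalg.solve(A, b)
    return x_w, y_w

def build_sensor_matrices(M_L, M_R, n=2048):
    """Precompute two n x n look-up tables, done once offline after calibration:
    X holds lateral decentring x_w, Y holds height y_w."""
    X = np.empty((n, n))
    Y = np.empty((n, n))
    for i in range(n):            # pixel index on the left line-scan
        for j in range(n):        # pixel index on the right line-scan
            X[i, j], Y[i, j] = triangulate(M_L, M_R, float(i), float(j))
    return X, Y

# At run time, a measurement is just two memory reads:
# x_w, y_w = X[pixel_left, pixel_right], Y[pixel_left, pixel_right]
```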

4. Experimental Results

In this section, we describe some practical experiments carried out with the aim of verifying the performance of the proposed measuring system. The experiments were aimed at establishing the accuracy of measurements under real operating conditions. To this end, measurements were taken of a moving contact wire to verify the efficiency of the tracking algorithms. In addition, static measurements were taken of threads in different positions in order to calculate the magnitude of measurement error.

4.1. Measuring the 2-D Position of Static Objects

The aim of this experiment was to verify the accuracy of lateral decentring (X) and height (Y) measurements taken with the system when both the sensor and the measured objects were static. The calibration pattern structure, with 16 white 0.5 mm diameter threads, was placed in a known position with coordinates (XS1real, YS1real). The base reference system was situated at a central point between the two line-scans. Once the position measurements had been taken with the sensor system (XS1sensor, YS1sensor), the values obtained were compared with the real values. Figure 6 shows the calibration pattern with the 16 reference threads placed in the sensor measurement zone. The various threads are numbered and marked by a yellow dot.
As an example, Table 2 gives both the real and the measured coordinate values for the threads in one particular calibration pattern position. The same experiment was carried out for different thread and pattern positions, always ensuring that the threads were positioned within the measurement zone. From a total of 524 measurements, the maximum error in x was 2.1 mm and the standard error was 0.82 mm. For height measurements (y), the maximum error was 3.2 mm and the standard error was 0.94 mm. This error was due to imprecise thread placement, calibration error and/or sensor system resolution.

4.2. Measuring the 2-D Position of a Moving Object

This second experiment aimed to verify the validity of the monitoring algorithm and the system's capacity for measuring the position of an object (contact wire) moving at high speed.
To move the contact wire, one end was attached to a bearing held in a fixed position, so that the wire could only rotate. The other end was attached to a support located on an aluminium bar which in turn was attached to a motor rotor. The motor rotates the aluminium bar parallel to the viewing plane of the line-scans. The rotor axis was positioned so that, whatever the position of the rotating bar, the contact wire was always located within the field of vision of the cameras. A photograph of the experimental structure assembled to generate contact wire movement is shown in Figure 7.
The graphs in Figure 8 give height and decentring measurements at a sample speed of 100 frames per second (fps). The small jumps in the curves are due to the experimental structure used.
To verify real-time sensor system operation, line-scan tests were carried out for various acquisition times. The maximum acquisition and processing speed without loss of samples was 1,430 frames per second, using a PC with a P4 2.0 GHz processor and 512 MB of RAM. This high speed was achieved because it was only necessary to find the centroids of the captured line-images. Once the centroids had been obtained, a matrix read is sufficient to obtain the values of x and y.
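A minimal sketch of the per-frame pipeline described above (assuming the wire projects as a bright region on each line-image; the names and the threshold are hypothetical):

```python
import numpy as np

def centroid_pixel(line_image, threshold=128):
    """Intensity-weighted centroid (in pixels) of the wire projection."""
    idx = np.flatnonzero(line_image > threshold)
    if idx.size == 0:
        return None                          # wire not visible in this frame
    w = line_image[idx].astype(float)
    return int(round(np.dot(idx, w) / w.sum()))

def measure(line_L, line_R, X, Y):
    """One 2-D position measurement from a pair of line-images,
    using the precomputed sensor matrices X and Y."""
    p_L, p_R = centroid_pixel(line_L), centroid_pixel(line_R)
    if p_L is None or p_R is None:
        return None
    return X[p_L, p_R], Y[p_L, p_R]          # lateral decentring, height
```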

5. Conclusions

This paper has presented a 2-D sensor system based on two line-scans. Among other applications, it can be used to verify the geometry of contact wires supplying power to electric locomotives. A mathematical model of the sensor has been presented.
In addition, a method for calibrating the sensor system has been proposed and described. This method is based on calibrating both line-scans using a calibration pattern in a known position. The calibration pattern is a 3-D structure with various parallel threads attached. Calibration provides the matrices containing the coordinates (x, y) of each scene point, corresponding to the projection of these points onto each line-scan. To obtain the coordinates (x, y), it is only necessary to know the pixel position of the projection of the object on each line-scan. These pixel values are then used to read the sensor matrices, which contain the coordinates (x, y). In this way, processing time may be less than 0.7 ms.
Experiments were carried out in order to verify system operation. The first experiment examined static measurement error: from a total of 524 measurements, the maximum error in x was found to be 2.1 mm, with a standard error of 0.82 mm; for height measurement (y), the maximum error was 3.2 mm and the standard error was 0.94 mm. The second experiment measured the position of a moving contact wire, in order to verify the monitoring algorithms. The maximum acquisition and processing speed without sample loss was 1,430 frames per second, using a PC with a P4 2.0 GHz processor and 512 MB of RAM.

Acknowledgments

This work was funded through project T5/2006, sponsored by the Spanish Ministry of Public Works (Ministerio de Fomento).

References and Notes

  1. Monaco, J.; Bovik, A.C.; Cormack, L.K. Epipolar Spaces for Active Binocular Vision Systems. Proceedings of IEEE International Conference on Image Processing, San Antonio, TX, USA, September 16–19, 2007; pp. 549–551.
  2. Park, J.; Kak, A.C. A New Approach for Active Stereo Camera Calibration. Proceedings of IEEE International Conference on Robotics and Automation, Rome, Italy, April 10–14, 2007; pp. 3180–3185.
  3. Chung, R. Correspondence Stereo Vision under General Stereo Camera Configuration. Proceedings of IEEE International Conference on Robotics, Intelligent System and Signal Processing, Changsha, Hunan, China, October 8–13, 2003; pp. 405–410.
  4. Dornaika, F.; Chung, C.R. Stereo Geometry From 3-D Ego-Motion Streams. IEEE Trans. Syst. Man Cybern. 2003, 33, 308–323. [Google Scholar]
  5. Lazaro, J.L.; Lavest, J.M.; Luna, C.A.; Gardel, A. Sensor for Simultaneous High Accurate Measurements of Three-Dimensional Points. Sens. Lett. 2006, 4, 426–432. [Google Scholar]
  6. Guan, C.; Hassebrook, L.G.; Lau, D.L. Composite Structured Light Pattern for Three-dimensional Video. Opt. Express 2003, 11, 406–417. [Google Scholar]
  7. Xu, L.; Zhang, Z.J.; Ma, H.; Yu, Y.J. Real-Time 3D Profile Measurement Using Structured Light. Proceedings of International Symposium On Instrumentation Science And Technology, Harbin, China, August 8–12, 2006; pp. 339–343.
  8. Kataoka, K.; Osawa, T.; Ozawa, S.; Wakabayashi, K.; Arakawa, K. 3D Building Facade Model Reconstruction Using Parallel Images Acquired by Line Scan Cameras. Proceedings of IEEE International Conference on Image Processing, Genova, Italy, September 11–14, 2005; pp. 1009–1012.
  9. Kim, J.H.; Ahn, S.; Jeon, J.W.; Byun, J.E. A High-speed High-resolution Vision System for the Inspection of TFT LCD. Proceedings of IEEE International Symposium on Industrial Electronics, Pusan, Korea, June 12–16, 2001; Vol. 1. pp. 101–105.
  10. Heikkilä, J. Geometric Camera Calibration Using Circular Control Points. IEEE Trans. Patt. Anal. Mach. Int. 2000, 22, 1066–1077. [Google Scholar]
  11. Zhang, Z. Flexible Camera Calibration By Viewing a Plane From Unknown Orientations. Proceedings of International Conference on Computer Vision, Corfu, Greece, September 26–27, 1999; pp. 666–673.
  12. Douxchamps, D.; Chihara, K. High-Accuracy and Robust Localization of Large Control Markers for Geometric Camera Calibration. IEEE Trans. Patt. Anal. Mach. Int. 2009, 31, 376–383. [Google Scholar]
  13. Horaud, R.; Mohr, R.; Lorecki, B. On Single-Scanline Camera Calibration. IEEE Trans. Robotics Automat. 1993, 9, 71–75. [Google Scholar]
  14. Luna, C.A.; Mazo, M.; Lázaro, J.L.; Vázquez, J.F.; Ureña, J.; Palazuelos, S.E.; García, J.J.; Espinosa, F.; Santiso, E. Method to Measure the Rotation Angles in Vibrating Systems. IEEE Trans. Instrum. Meas. 2006, 55, 232–239. [Google Scholar]
Figure 1. Block diagram of 2-D sensor system.
Figure 2. Relation between line-scan coordinate system and world coordinate system.
Figure 3. Diagram explaining the pin-hole model for a line-scan.
Figure 4. Calibration pattern comprising threads.
Figure 5. Diagram showing the viewing planes crossed by the calibration pattern threads.
Figure 6. Diagram showing the planes of vision to be crossed by the calibration pattern threads. The distance between the line-scans is 106.5 cm, and the angles αLS_L = 67.68 degrees and αLS_R = 67.28 degrees.
Figure 7. Structure assembled in order to generate contact wire movement.
Figure 8. Contact wire measurements at a sample speed of 100 fps: Height and lateral decentring.
Table 1. Calibration results.

Parameter  | Left-hand line-scan | Right-hand line-scan
tx, cm     | 53.34               | −53.35
ty, cm     | 106.35              | 106.12
α, degrees | −21.8               | 21.7
fx, pixels | 2,577.0             | 2,565.9
cx, pixels | 1,033.9             | 1,019.7
ε, pixels  | 0.63                | 0.49
Table 2. Real and sensor-measured coordinates for different points (threads).

Thread | XS1 real (mm) | XS1 sensor (mm) | Error (mm) | YS1 real (mm) | YS1 sensor (mm) | Error (mm)
1      | −490          | −489.6322       | 0.3678     | 1,070         | 1,069.5631      | 0.4369
2      | −220          | −219.7668       | 0.2332     | 1,070         | 1,071.817       | 1.817
3      | 0             | 0.1267          | 0.1267     | 1,070         | 1,069.8788      | 0.1212
4      | 220           | 221.2802        | 1.2802     | 1,070         | 1,069.5836      | 0.4164
5      | 490           | 489.8626        | 0.1374     | 1,070         | 1,071.0224      | 1.0224
6      | −490          | −490.0087       | 0.0087     | 1,370         | 1,368.6644      | 1.3356
7      | −220          | −220.5975       | 0.5975     | 1,370         | 1,369.9805      | 0.0195
8      | 0             | −0.5668         | 0.5668     | 1,370         | 1,369.8184      | 0.1816
9      | 220           | 219.1745        | 0.8255     | 1,370         | 1,370.5471      | 0.5471
10     | 490           | 490.9952        | 0.9952     | 1,370         | 1,369.9767      | 0.0233
11     | −220          | −220.2104       | 0.2104     | 1,670         | 1,671.0066      | 1.0066
12     | 0             | −0.4616         | 0.4616     | 1,670         | 1,669.8844      | 0.1156
13     | 220           | 219.5798        | 0.4202     | 1,670         | 1,668.9608      | 1.0392
14     | −220          | −219.9084       | 0.0916     | 2,030         | 2,031.5914      | 1.5914
15     | 0             | 1.7755          | 1.7755     | 2,030         | 2,029.9706      | 0.0294
16     | 220           | 218.9772        | 1.0228     | 2,030         | 2,030.5424      | 0.5424
