
Article

Camera Calibration Robust to Defocus Using Phase-Shifting Patterns

1. Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026, China
2. Department of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230088, China
3. School of Automation, Wuhan University of Technology, Wuhan 430070, China
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(10), 2361; https://doi.org/10.3390/s17102361
Submission received: 16 August 2017 / Revised: 10 October 2017 / Accepted: 12 October 2017 / Published: 16 October 2017
(This article belongs to the Section Physical Sensors)

Abstract

Camera parameters cannot be estimated accurately using traditional calibration methods if the camera is substantially defocused. To tackle this problem, an improved approach based on three phase-shifting circular grating (PCG) arrays is proposed in this paper. Rather than encoding the feature points into the intensity, the proposed method encodes them into the phase distribution, which can be recovered precisely using phase-shifting methods. The PCG centers are extracted as feature points, and they can be located accurately even if the images are severely blurred. Unlike the previous method, which uses only a single circle, the proposed method uses concentric circles to estimate the PCG center, so the center can be located precisely. This paper also presents an algorithm that automatically sorts the detected feature points. Experiments with both synthetic and real images were carried out to validate the performance of the method, and the results show the superiority of PCG arrays over the concentric circle array, even under severe defocus.

1. Introduction

Camera calibration is always the first and indispensable step in vision systems such as three-dimensional (3D) measurement, microscopy, and robot navigation [1,2,3]. Since the calibration accuracy directly influences the performance of these systems, numerous calibration methods have been put forward and various targets have been designed in recent decades. These calibration methods can be roughly divided into two categories: object-based calibration and self-calibration. There are three types of objects according to their dimensionality, namely 3D targets [4], 2D targets [5,6], and 1D targets [7]. Self-calibration does not need any designed target, yet its accuracy is limited in the presence of noise [8].
To our knowledge, 2D targets have been employed widely because they are easy to manufacture and flexible to use. There are two common patterns of 2D targets: grids [9,10] and circles [11,12,13,14,15,16]. By comparison, the circle pattern has become a research hotspot due to its rich geometric properties and ease of recognition. The earliest idea directly used the center of the projected ellipse as the center of the spatial circle. However, this is improper under general perspective projection [15]. To estimate the real projected centers of circle patterns precisely, Kim et al. [17] proposed a simple method based on concentric circles with known radii. Since then, many researchers have concentrated on finding various geometric or algebraic constraints of concentric circles to estimate the camera parameters. Zhang et al. [18] presented a solution to efficiently recover the projection of the circle center from concentric-circle images; the problem is formulated as a first-order polynomial eigenvalue problem by considering the pole-polar relationship. Subsequently, Chen et al. [19] suggested a calibration method based on a planar pattern containing a concentric circle array, in which the imaged centers of the concentric circles are located using the principles of the cross-ratio and the pole-polar relationship. To further improve the computational efficiency, Yang et al. [20] introduced a method using only four intersections and two cross-ratio equations to solve the imaged centers of a concentric circle array.
These calibration methods were primarily developed for short-range vision systems [21]. Accurate calibration requires well-focused pattern images. However, if the targets are applied in long-range vision systems, the camera is usually out of focus, so the calibration result is not reliable. In this case, if the system requires highly accurate calibration results, the target should be large enough to ensure that the captured images are sharp [22]. Evidently, this is a great challenge for long-range systems, since fabricating large targets is difficult in terms of accuracy, feasibility, and cost.
To tackle this problem, researchers have designed targets whose feature points are encoded into the phase domain. With these targets, accurate calibration results can be achieved even if the images are blurred. Schmalz et al. [23] performed camera calibration with horizontal and vertical phase-shift sequences whose phase distribution was robust against defocusing. Huang et al. [24] employed eight-frame phase-shifted fringe patterns as active targets and further improved calibration accuracy. Bell et al. [21] utilized a set of horizontal and vertical phase-shift fringe patterns to calibrate an out-of-focus camera, and An et al. [22] used Bell's method to calibrate a large-range structured light system. However, these methods require multiple images to be captured at each camera pose and demand more human interaction, which is laborious and inefficient. In our previous works [25,26], we proposed a method to calibrate the camera with defocused images, but it directly used the center of the projected ellipse as the real center of a phase-shifting circular grating, which can be improved: since the projection of a circle is not invariant, the center recovered directly from the center of the projected ellipse is not the real projection of the circle center [15].
In this paper, considering both the accuracy of real center location and the application of the target in long-range vision systems, we propose an efficient approach utilizing phase-shifting circular grating arrays to calibrate the camera even with defocused images. We formulate the feature extraction as a concentric circle problem to estimate the real imaged centers of the PCGs, rather than directly using the centers of the projected ellipses, so the imaged centers can be located accurately. Instead of a full set of phase-shift patterns, we need just three frames at each pose, which reduces the workload and improves efficiency. The wrapped phases are calculated by the three-step phase-shift algorithm [27]. Zero-phase points are roughly detected by the Canny algorithm and then optimized to sub-pixel precision. We evaluate the performance of the proposed method on synthetic and real data. Moreover, in the contrast experiment with a concentric circle array pattern, the proposed method shows its superiority in accuracy and its insensitivity to image defocusing.
Section 2 explains the related work behind the proposed camera calibration method, including the camera model, the circle projection model, the pole-polar relationship, and real imaged center estimation. In Section 3, the proposed method is presented. Experimental results on synthetic and real data are shown in Section 4. Lastly, Section 5 gives a brief conclusion.

2. Related Works

2.1. Camera Model

The camera model is a set of mathematical equations that describes the relationship between a 3D world point and its projection onto the camera image plane. For a 3D point $P = (X_W, Y_W, Z_W)$, its corresponding image point is $p = (u, v)$; $\tilde{P}$ and $\tilde{p}$ denote their homogeneous coordinates. The imaging process can be simplified as:

$$s\tilde{p} = K[R\ \ t]\,\tilde{P} \quad (1)$$

where R and t, called the extrinsic parameters, represent the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, respectively; s is a scale factor; and K is the intrinsic matrix:

$$K = \begin{bmatrix} f_u & \beta & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$

where $f_u$ and $f_v$ are the focal lengths of the camera along the u and v directions, respectively; $\beta$ is the skew factor; and $(u_0, v_0)$ is the principal point. If the camera lens is nonlinear, the distortion coefficients can be modeled as $D = [k_1\ k_2\ p_1\ p_2\ k_3]^T$, where $k_1$, $k_2$, and $k_3$ are the radial distortion coefficients, and $p_1$ and $p_2$ represent the tangential distortion coefficients. For simplicity, only the radial distortion coefficients $k_1$ and $k_2$ are considered, since the distortion function is mainly dominated by the radial components [5,6].
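As a concrete illustration, the following Python sketch (our own helper, not from the paper) projects a world point through this model with the two radial distortion terms retained:

```python
import numpy as np

def project(P, Kmat, R, t, k1=0.0, k2=0.0):
    """Project a 3D world point through the pinhole model of Eq. (1),
    then apply the radial distortion terms k1, k2 (a sketch)."""
    Pc = R @ P + t                        # world -> camera frame
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]   # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    xd, yd = x * d, y * d
    u = Kmat[0, 0] * xd + Kmat[0, 1] * yd + Kmat[0, 2]
    v = Kmat[1, 1] * yd + Kmat[1, 2]
    return np.array([u, v])
```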

2.2. Circle Projection Model

The common expression of a spatial circle is $(x - x_0)^2 + (y - y_0)^2 = r^2$, which can be expressed in matrix form as:

$$[x\ y\ 1]\, C \,[x\ y\ 1]^T = 0 \quad \text{with} \quad C = \begin{bmatrix} 1 & 0 & -x_0 \\ 0 & 1 & -y_0 \\ -x_0 & -y_0 & x_0^2 + y_0^2 - r^2 \end{bmatrix} \quad (3)$$

where $(x, y)$ is a point on the circle, $(x_0, y_0)$ is the circle center, and $r$ is the radius. Similarly, a 2D ellipse curve $ax^2 + by^2 + cxy + dx + ey + f = 0$ can be represented in equivalent matrix form as:

$$[x\ y\ 1]\, E \,[x\ y\ 1]^T = 0 \quad \text{with} \quad E = \begin{bmatrix} a & c/2 & d/2 \\ c/2 & b & e/2 \\ d/2 & e/2 & f \end{bmatrix} \quad (4)$$

The spatial circle C lies in world space, with its circumference on the x–y plane (z = 0), while the ellipse curves, as projections of the spatial circle, lie on the image plane. So the matrix forms can be written as:

$$\tilde{P}^T C \tilde{P} = 0 \quad (5)$$

$$\tilde{p}^T E \tilde{p} = 0 \quad (6)$$

Combining Equations (1), (5), and (6), the transformation relationship between the spatial circle and its projected ellipse curve can be obtained:

$$sE = (K[R\ \ t])^{-T}\, C\, (K[R\ \ t])^{-1} \quad (7)$$
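One point worth making explicit: because the circle lies on the z = 0 plane, $K[R\ t]$ acts through the 3 × 3 plane homography built from the first two columns of R and t, which is what makes the inverses in Equation (7) well defined. A minimal Python sketch under that reading:

```python
import numpy as np

def circle_to_conic(x0, y0, r):
    """Conic matrix C of a spatial circle on the z = 0 plane, Eq. (3)."""
    return np.array([[1.0, 0.0, -x0],
                     [0.0, 1.0, -y0],
                     [-x0, -y0, x0**2 + y0**2 - r**2]])

def project_circle(C, Kmat, R, t):
    """Image conic E of the circle via Eq. (7). Since the circle lies
    on z = 0, K[R|t] reduces to the 3x3 homography K[r1 r2 t]."""
    Q = Kmat @ np.column_stack((R[:, 0], R[:, 1], t))
    Qi = np.linalg.inv(Q)
    E = Qi.T @ C @ Qi                    # equals s*E, scale still free
    return E / E[2, 2]                   # fix the scale
```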

2.3. Pole-Polar Relationship

For a spatial circle C, there exists a relationship between a point p and a line l in the same plane: l = Cp. The point p is the pole of l with respect to C, and the line l is the polar of p. In particular, if p is the circle center, then l is the intersection line (vanishing line) of the supporting plane with the plane at infinity. In the image plane, E is the projected conic of the spatial circle, and the corresponding formula follows [28]:

$$l = \lambda E p \quad (8)$$

where λ is a constant factor.

2.4. Circle Center Estimation

As we know, the circle center cannot be recovered directly from the image because its projection is not invariant. Therefore, treating the centers of projected ellipses as the real imaged centers is unreliable. In the literature, the real imaged center can be computed from geometric and algebraic constraints, as well as the pole-polar relationship, on the projections of concentric circles [18,19,20,29]. Here, we estimate the real imaged center from three PCG images based on the theory mentioned above.
Assume that $C_1$ and $C_2$ are two spatial concentric circles, and that their projected conics are $E_1$ and $E_2$. From Equation (7), we know the transformation relationship between the spatial circles and their projected ellipse curves, so we obtain:

$$s_1 E_1 = Q^{-T} C_1 Q^{-1} \quad (9)$$

$$s_2 E_2 = Q^{-T} C_2 Q^{-1} \quad (10)$$

where $Q = K[R\ \ t]$, and $s_1$ and $s_2$ are non-zero scale factors. Subtracting Equation (10) from Equation (9), we get:

$$s_1 E_1 - s_2 E_2 = Q^{-T} C_1 Q^{-1} - Q^{-T} C_2 Q^{-1} = Q^{-T} \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & r_2^2 - r_1^2 \end{bmatrix} Q^{-1} \quad (11)$$
The radii of the two concentric circles are different, so $r_2^2 - r_1^2$ is non-zero. The property of similarity transformations indicates that the matrix in Equation (11) has a pair of identical eigenvalues that differ from the third one.
This conclusion provides a clue for improving the computational efficiency of solving for the circle center. For the concentric circles, with imaged center o and vanishing line $l_\infty$, Equation (8) gives:

$$l_\infty = \lambda_1 E_1 o \quad (12)$$

$$l_\infty = \lambda_2 E_2 o \quad (13)$$

where $\lambda_1$ and $\lambda_2$ are non-zero scale factors. Subtracting Equation (12) from Equation (13), we get:

$$(sE_2 - E_1)\,o = 0, \quad \text{with} \quad s = \lambda_2/\lambda_1 \quad (14)$$
Equation (14) is an equivalent form of Equation (11): a polynomial eigenvalue problem that can be solved in MATLAB using the function polyeig(). Since the matrix size is 3 × 3, three eigenvalues are obtained. According to the conclusion above, two of them are identical and differ from the third one; the eigenvector corresponding to the third eigenvalue is the circle center [18].
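Equivalently, Equation (14) can be solved as the generalized eigenvalue problem $E_1 o = s E_2 o$. The following Python sketch (an assumed stand-in for MATLAB's polyeig, using SciPy) does exactly that:

```python
import numpy as np
from scipy.linalg import eig

def imaged_center(E1, E2):
    """Solve (s*E2 - E1) o = 0 (Eq. (14)) as the generalized eigenvalue
    problem E1 o = s E2 o; two eigenvalues coincide, and the eigenvector
    of the remaining one is the imaged circle center."""
    w, V = eig(E1, E2)                   # solves E1 v = w E2 v
    w = np.real(w)
    # the distinct eigenvalue is the one farthest from the other two
    gaps = [abs(w[0] - w[1]) + abs(w[0] - w[2]),
            abs(w[1] - w[0]) + abs(w[1] - w[2]),
            abs(w[2] - w[0]) + abs(w[2] - w[1])]
    o = np.real(V[:, int(np.argmax(gaps))])
    return o[:2] / o[2]                  # dehomogenize to (u, v)
```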

3. Proposed Method

3.1. Phase-Shifting Pattern

Here we present the phase-shifting circular grating (PCG) patterns, which encode the feature points into the phase distribution to calibrate the camera. The phase-shifting circular grating images $I_k^d(x, y)$ displayed on a monitor can be expressed as [30]:
$$I_k^d(x, y) = a + b\cos\left(\Phi^d(x, y) + \frac{2\pi(k - 1)}{K}\right) \quad (15)$$
where k = 1, 2, …, K; $\Phi^d(x, y) = 2\pi r(x, y)/T$ denotes the unwrapped phase; T denotes the period of the phase-shifting circular grating; the radius $r(x, y) = \sqrt{(x - x_0)^2 + (y - y_0)^2}$ is the Euclidean distance between a point $(x, y)$ of the phase-shifting circular grating and its center $(x_0, y_0)$; and a and b adjust the intensity of the patterns. Once the patterns are captured by a camera, they can be described as:
$$I_k^c(u, v) = A(u, v) + B(u, v)\cos\left(\phi^c(u, v) + \frac{2\pi(k - 1)}{K}\right) \quad (16)$$
where $A(u, v)$ is the average intensity and $B(u, v)$ is the intensity modulation of the phase-shifting patterns. When $K \geq 3$, $A(u, v)$, $B(u, v)$, and $\phi^c(u, v)$ can be obtained as follows:
$$A(u, v) = \frac{1}{K}\sum_{k=1}^{K} I_k^c(u, v) \quad (17)$$

$$B(u, v) = \frac{2}{K}\sqrt{\left[\sum_{k=1}^{K} I_k^c(u, v)\sin\frac{2\pi k}{K}\right]^2 + \left[\sum_{k=1}^{K} I_k^c(u, v)\cos\frac{2\pi k}{K}\right]^2} \quad (18)$$

$$\phi^c(u, v) = \arctan\frac{\sum_{k=1}^{K} I_k^c(u, v)\sin(2\pi k/K)}{\sum_{k=1}^{K} I_k^c(u, v)\cos(2\pi k/K)} \quad (19)$$
With the phase-shifting patterns captured by the camera, the wrapped phase can be computed by Equation (19).
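For concreteness, a short Python sketch of Equations (17)–(19) for an arbitrary number of shifts K (the function name and array layout are our own):

```python
import numpy as np

def phase_from_shifts(imgs):
    """Average intensity, modulation, and wrapped phase (Eqs. (17)-(19))
    from K >= 3 equally shifted images; imgs is a (K, H, W) float array."""
    K = imgs.shape[0]
    k = np.arange(1, K + 1).reshape(-1, 1, 1)          # k = 1 ... K
    S = np.sum(imgs * np.sin(2 * np.pi * k / K), axis=0)
    C = np.sum(imgs * np.cos(2 * np.pi * k / K), axis=0)
    A = imgs.mean(axis=0)                              # Eq. (17)
    B = (2.0 / K) * np.sqrt(S**2 + C**2)               # Eq. (18)
    phi = np.arctan2(S, C)                             # Eq. (19), wrapped
    return A, B, phi
```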
The pattern employed in this method consists of several identical circular gratings, as shown in Figure 1a–c; we set K = 3 and a = b = 0.5. Since there is a linear relationship between the unwrapped phase and r(x, y), points with the same phase are distributed on the same circle. As zero-phase detection has the highest precision, we used a phase-shift technique to detect zero-phase points [31]. Specifically, the zero-phase points are distributed on circles with r(x, y) = mT, m = 1, 2, 3, …; in the literature, points with $\Phi = 2n\pi$, n = 1, 2, 3, … are also called zero-phase points. Figure 1d shows that the zero-phase points are distributed on the blue and green circles, whose radii are T and 2T respectively and which share a common center. The maximum value of r(x, y), denoted rmax, determines the size of the PCG; a suitable rmax should be chosen to ensure two complete PCG periods.
PCG arrays are utilized to gain more circle centers as feature points for camera calibration, since a single PCG has only one center. The array has M rows and N columns filled with uniform PCGs. The spacings between adjacent centers along the horizontal and vertical directions are equal and known; let this spacing be Ds. To avoid interference between adjacent PCGs, we require Ds ≥ 2rmax. From the above analysis, for an M × N PCG array, the zero-phase points are distributed on M × N pairs of concentric circles, giving M × N feature points for calibration. Under perspective projection, the projection of a circle is an ellipse [32]; therefore, the imaged zero-phase points are distributed on 2 × M × N ellipses.
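The following Python sketch (our own rendering helper, not the authors' code) generates the K phase-shifted images of such an array per Equation (15); with a = b = 0.5 the intensities lie in [0, 1] and would be scaled to 8 bits for display:

```python
import numpy as np

def pcg_array(M, N, Ds, T, r_max, K=3, a=0.5, b=0.5):
    """Render the K phase-shifted images of an M x N PCG array
    (a sketch of Eq. (15); pixels outside the PCGs are held at a)."""
    H, W = M * Ds, N * Ds
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    imgs = np.full((K, H, W), a)                  # uniform background
    for i in range(M):
        for j in range(N):
            cy, cx = (i + 0.5) * Ds, (j + 0.5) * Ds   # PCG center
            r = np.hypot(xx - cx, yy - cy)
            mask = r <= r_max                     # Ds >= 2*r_max: no overlap
            phi = 2 * np.pi * r / T               # unwrapped radial phase
            for k in range(K):
                imgs[k][mask] = a + b * np.cos(phi[mask] + 2 * np.pi * k / K)
    return imgs

# e.g. a 4 x 4 array with T = 35 px and two full periods inside each PCG
patterns = pcg_array(4, 4, Ds=180, T=35, r_max=80)
```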

3.2. Feature Detection

As mentioned above, the imaged zero-phase points are distributed on 2 × M × N ellipses, which are the projections of M × N pairs of concentric circles. Those ellipse curves must therefore be computed accurately to locate the PCG centers that serve as feature points. To start, we separate each PCG from the array: a suitable threshold is chosen to obtain the binary mask Ω via Equation (20), and Ω is divided into M × N sub-masks, one per PCG, using a connected-component labeling operation. The sub-image of each PCG can then be treated individually.
$$\Omega = \begin{cases} 1, & \text{if } I_1^c(u, v) + I_2^c(u, v) + I_3^c(u, v) > threshold \\ 0, & \text{otherwise} \end{cases} \quad (20)$$
According to Equation (19), the wrapped phase φ(u, v) of the proposed patterns can be computed as
$$\varphi(u, v) = \arctan\left(\frac{\sqrt{3}\left(I_3^c(u, v) - I_2^c(u, v)\right)}{2I_1^c(u, v) - I_2^c(u, v) - I_3^c(u, v)}\right) \quad (21)$$
The zero-phase points can then be roughly detected with the conventional Canny edge-detection algorithm, because the wrapped phase has 2π discontinuities there. After that, the zero-phase points of each PCG are used to fit two ellipse curves by the least-squares ellipse-fitting algorithm [33]; each sub-mask identifies the wrapped phase of its corresponding PCG, and a rough location of the imaged PCG center is obtained as described in Section 2.4. Since edge detection only extracts pixel-level ellipse points, the zero-phase points should be refined to sub-pixel accuracy; using the constraint between the zero phase and the radius [26], this refinement is easily achieved.
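A compact Python sketch of this rough detection stage (mask, wrapped phase, and Canny), using OpenCV and wrapping the phase to [0, 2π) so the discontinuities fall on the zero-phase circles (a sketch under those assumptions, not the authors' implementation):

```python
import numpy as np
import cv2  # OpenCV, assumed available, for Canny and normalization

def detect_zero_phase(I1, I2, I3, threshold=0.3):
    """Rough pixel-level zero-phase points from the three captured
    images: Eq. (20) masks the PCG region, Eq. (21) gives the wrapped
    phase, and Canny picks up its 2*pi discontinuities."""
    mask = (I1 + I2 + I3) > 3 * threshold                  # Eq. (20)
    phi = np.arctan2(np.sqrt(3.0) * (I3 - I2),
                     2.0 * I1 - I2 - I3)                   # Eq. (21)
    phi = np.mod(phi, 2.0 * np.pi)     # wrap to [0, 2*pi): jumps at zero phase
    phi8 = cv2.normalize(phi, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(phi8, 100, 200)
    edges[~mask] = 0                                       # keep PCG area only
    return np.column_stack(np.nonzero(edges))              # (row, col) pixels
```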
Once the zero-phase points have been optimized to sub-pixel accuracy, high-accuracy ellipse curves can be obtained by least-squares ellipse fitting once more. Repeating the circle center estimation step, the real imaged center of each PCG is finally located. The whole feature-detection pipeline of the proposed method is shown in Figure 2.

3.3. Sorting Feature Points

Although the feature points are detected, camera calibration must be conducted with them in a meaningful order. This section therefore describes a crucial step for calibration: a sorting algorithm that automatically labels the feature points. It can be summarized as follows (a code sketch follows the list):
  • First, the centroid Z of the feature points is computed, and the Euclidean distances between Z and the feature points are used to identify one vertex: the feature point with the longest distance is taken as vertex A. Using points A and Z, we obtain a straight line $l_0: a_0 u + b_0 v + c_0 = 0$ ($a_0 > 0$).
  • We define $D_0 = a_0 u_0 + b_0 v_0 + c_0$, where $(u_0, v_0)$ is the coordinate of a feature point. Substituting each feature point into this equation gives a signed value $D_0$; the points with the maximum and minimum $D_0$ are B and D, respectively.
  • We obtain another straight line $l_1: a_1 u + b_1 v + c_1 = 0$ ($a_1 > 0$) connecting points B and D. Repeating step 2 with $D_1 = a_1 u_0 + b_1 v_0 + c_1$ locates point C. Once the four vertexes are determined, we compute the sum of the row and column coordinates of each vertex; the minimum and maximum sums identify the upper-left point A and the lower-right point C, respectively. The order of the vertexes can then be refined.
  • Since the size of the PCG array is known, planar constraints can be used to order the remaining feature points [34]. Finally, calibration can be performed using the resulting one-to-one mapping. The scheme of the sorting algorithm is presented in Figure 3.
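A Python sketch of the vertex-identification steps above (a hypothetical helper; the planar ordering of the interior points [34] is omitted):

```python
import numpy as np

def find_vertices(pts):
    """Corner points of the PCG-center grid, following Section 3.3
    (a sketch; pts is an (n, 2) array of (u, v) feature points)."""
    pts = np.asarray(pts, dtype=float)
    Z = pts.mean(axis=0)                              # centroid
    A = pts[np.argmax(np.linalg.norm(pts - Z, axis=1))]

    def line(p, q):
        """Coefficients (a, b, c) of the line through p and q, a > 0."""
        a, b = q[1] - p[1], p[0] - q[0]
        if a < 0:
            a, b = -a, -b
        return a, b, -(a * p[0] + b * p[1])

    a0, b0, c0 = line(A, Z)                           # line l0 through A, Z
    D0 = pts[:, 0] * a0 + pts[:, 1] * b0 + c0         # signed values D0
    B, D = pts[np.argmax(D0)], pts[np.argmin(D0)]     # extremes -> B and D
    a1, b1, c1 = line(B, D)                           # line l1 through B, D
    D1 = pts[:, 0] * a1 + pts[:, 1] * b1 + c1
    dA = A[0] * a1 + A[1] * b1 + c1
    # C is the extreme point on the opposite side of l1 from A
    C = pts[np.argmin(D1)] if dA > 0 else pts[np.argmax(D1)]
    # refine order: smallest u+v sum -> upper-left A, largest -> lower-right C
    V = np.array([A, B, C, D])
    s = V.sum(axis=1)
    A, C = V[np.argmin(s)], V[np.argmax(s)]
    keep = np.ones(4, bool)
    keep[[np.argmin(s), np.argmax(s)]] = False
    B, D = V[keep]
    return A, B, C, D
```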

4. Experiments and Results

In this section, we performed experiments with simulated and real images to verify the effectiveness and accuracy of the presented approach. All experiments were conducted on the same computer, and the imaged centers of the different targets were recovered in the same way as described in this paper.

4.1. Experiment on Simulated Images

In the computer simulation, the images, generated based on the ideal pinhole model, have a resolution of 1920 × 1280 pixels, with a distance between adjacent PCG centers of Ds = 375 pixels. The size of the virtual PCG array varied across simulations. The intrinsic matrix of the simulated camera is
$$K = \begin{bmatrix} 2000 & 0 & 960 \\ 0 & 2000 & 640 \\ 0 & 0 & 1 \end{bmatrix}$$
In the following simulations, we studied the impact of the PCG period and the number of PCGs on calibration accuracy, and investigated the performance of the PCG array under different noise levels. All the PCG array images used in the simulations are viewed by the simulated camera from 6 orientations. Each experiment was repeated for 20 trials, and the results were used to compute the errors. The root-mean-square re-projection error (RMSE) was also computed to judge the influence of each factor on calibration.
Influence of the number of PCGs. In general, increasing the number of feature points is one way to improve calibration accuracy. This experiment studies how the number of PCGs in the proposed pattern impacts calibration accuracy. The rows and columns of the PCG array were kept equal, and the array dimension was varied from 3 to 8 to change the number of feature points; the period of the PCG is 45 pixels. For each array size, calibration was performed on images corrupted with independent Gaussian noise with mean 0 and standard deviation 0.1 pixels, and the errors were computed against the ground truth.
As shown in Figure 4, the mean errors and the RMSEs decrease as the number of PCGs increases; sufficient feature points are thus essential for our method. In particular, once the number of feature points exceeds 6 × 6, the RMSEs and the absolute errors of the principal point are almost stable.
Influence of PCG period. As mentioned above, the radii of the zero-phase circles depend on the PCG period: changing the period changes those radii. To figure out the influence of the PCG period on our method and to select a suitable period for the real scene, an experiment with different PCG periods was conducted. The virtual PCG arrays contained 5 × 5 uniform PCGs with T = 25, 30, 35, 40, 45, 50, 55, and 60 pixels, and Gaussian noise with zero mean and standard deviation 0.1 pixels was added to the simulated images.
As can be seen in Figure 5a,b, the accuracy changes slowly, and the maximum differences of the relative errors in all parameters are less than 0.05%. Theoretically, we could choose a period as large as possible to ensure the accuracy of center location; however, the required monitor size increases with the PCG period, so the number of PCGs would have to be reduced, which could impact the final result. As shown in Figure 5c, although the RMSEs change only slightly with the period, there is still a suitable period that yields higher accuracy. In the real condition, the PCG period is set to 35 pixels so that more PCGs can be displayed on the 1920 × 1080 Liquid Crystal Display (LCD).
Influence of noise. This experiment examined the influence of noise on the location accuracy. The 5 × 5 PCG arrays with T = 45 pixels were employed to calibrate the simulated camera from six poses, with the standard deviation of the Gaussian noise varied from 0 to 0.7 pixels. As can be seen from Figure 6a, the relative errors of the focal length increase nonlinearly with the noise. When the noise is below 0.45 pixels, our method shows its robustness; above 0.45 pixels, the errors occasionally decrease. Figure 6b,c shows that the absolute errors and RMSEs increase irregularly with the noise level, which does not demonstrate robustness of the proposed method to noise; high-precision feature point extraction is therefore required.

4.2. Experiment on Real Images

To verify the performance of our method in a real scene, a typical calibration system was set up as shown in Figure 7. The orientation and position of the camera can be adjusted by a device consisting of a turntable and a lift. The images were taken by a Canon EOS-M2 camera with a zoom lens at a resolution of 1920 × 1280. An LCD from Admiral Oversea Corporation with 1920 × 1080 resolution served as the target to display the calibration patterns. To start, the target was placed at a suitable distance from the camera, with the optical axis of the camera perpendicular to the screen. The camera was fixed on the device, and a focal length was chosen to capture a sharp image of the pattern. The camera was controlled by a smartphone, so the images could be captured remotely.
Then, two experiments were designed and performed to verify, respectively, the accuracy of the center location used in our method and the robustness of the proposed method to defocus. We used the same algorithm as in the simulations for camera calibration; all operations were performed in MATLAB.

4.2.1. Accuracy Verification Experiment

Directly regarding the center of the projected ellipse as the center of a spatial circle is improper under general perspective [15]. In the proposed method, the real imaged center of a PCG is instead found by solving a concentric circle problem, so the real projection can be computed using the method described in Section 2.4. To verify the accuracy of this center estimation, an experiment with our pattern was designed and performed: the rotation angle was changed by the turntable shown in Figure 7, turning from 0° to 45°, with images captured every 15°.
The experiment was carried out with a single PCG pattern with T = 200 pixels, and we marked the PCG center with a spot (the white '+' in Figure 8a–c). The captured images at 15°, 30°, and 45° are shown in Figure 8a–c. For each position, the centers of the inner (blue '×') and outer (green '×') ellipses were computed, as well as the imaged center located by our method (red '×'). The PCG center regions are enlarged at the same magnification in Figure 8d–f, matching the images one-to-one as a–d, b–e, and c–f. As can be seen in Figure 8d–f, the centers of the fitted ellipses move farther from the real centers as the angle increases, but the projection located by our method coincides well with the real projection. We can therefore conclude that the location accuracy of this method is higher than that of the method used in our previous work [25,26], which partly illustrates the value of this contribution; the method is especially suitable for scenes with large rotation angles.

4.2.2. Contrast Experiment with the Concentric Circle Array

In this experiment, we compared the concentric circle array pattern and the PCG arrays to illustrate the superiority of the latter. Both contain 6 × 6 feature points, and their centers have the same locations. The parameters of the PCG arrays are a = 0.5, b = 0.5, T = 35 pixels, and Ds = 180 pixels; the radii of the two concentric circles are thus 35 and 70 pixels, respectively. We adjusted the focal length and the aperture of the camera to capture images with three different defocus degrees, and for each defocus group we took images of the two targets from fifteen different orientations. The centers of the PCGs and of the concentric circles are regarded as feature points, and the real imaged centers of both patterns are estimated as presented in Section 2.4; however, the sub-pixel ellipse of the concentric circle pattern is extracted by an optimized interpolation algorithm [20].
Figure 9 shows three sets of pattern images with different defocus degrees: the first set was captured almost in focus, while the other two groups were slightly and severely defocused, respectively. Figure 10 shows the wrapped phase of the PCGs with the detected feature points, where the red crosses denote the imaged centers. We estimated the camera intrinsic parameters using the standard calibration method [34], with the RMSE of the feature points used to evaluate calibration accuracy. The calibration results of the three trials are listed in Table 1. In the first trial, with well-focused images, the RMSE of the presented method is slightly smaller than that of the concentric circle array; however, the difference between the two methods grows significantly as the defocus becomes severe. The results show that the RMSE of the concentric circle array increases rapidly, while that of the PCG arrays does not. This indicates that our method can calibrate the camera with high accuracy even in out-of-focus scenes.

5. Conclusions

This paper has presented an accurate camera calibration method that is robust to defocus. Exploiting the insensitivity of fringe patterns to image defocusing, the proposed display patterns are designed as three phase-shifted circular grating arrays, with the centers of the PCGs used as feature points. Since zero-phase points are distributed on concentric circles, the feature location problem is treated as a concentric circle problem: we estimate feature locations by the pole-polar relationship and algebraic operations rather than using the center of the projected ellipse directly, so the center can be located precisely. Our study gives a solution to the conventional difficulties of using defocused images for camera calibration. We also present a simple sorting algorithm to label the feature points. Moreover, the method requires just three frames at each pose, compared with the larger pattern sets of other fringe-based methods. The effectiveness of the proposed method has been validated by experiments with simulated and real images. This robustness allows calibration from blurred images taken with a handheld camera, which is valuable for calibrating long-range vision systems.

Acknowledgments

The authors are grateful to the National Natural Science Foundation of China (Grant Nos. 61275011, 51605130, 51405126) for the financial support.

Author Contributions

The paper was a collaborative effort between the authors. Keyi Wang, Mengchao Ma and Xiangcheng Chen provided guidance during the entire research. Bolin Cai and Yuwei Wang implemented the algorithm, designed and performed the experiments. Bolin Cai analyzed the data and prepared the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yu, F.; Gallup, D. 3D Reconstruction from accidental motion. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 3986–3993. [Google Scholar]
  2. Yoon, K.J.; Kweon, I.S. Adaptive support-weight approach for correspondence search. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 650–656. [Google Scholar] [CrossRef] [PubMed]
  3. Royer, E.; Lhuillier, M.; Dhome, M.; Lavest, J.M. Monocular vision for mobile robot localization and autonomous navigation. Int. J. Comput. Vis. 2007, 74, 237–260. [Google Scholar] [CrossRef]
  4. Wilczkowiak, B.M.; Sturm, P.; Boyer, E. Using geometric constraints through parallelepipeds for calibration and 3D modeling. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 194–207. [Google Scholar] [CrossRef] [PubMed]
  5. Tsai, R.Y. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  6. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  7. Zhang, Z. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899. [Google Scholar] [CrossRef] [PubMed]
  8. Wu, C.; Hu, Z.Y.; Zhu, H.J. Camera calibration with moving one-dimensional objects. Pattern Recognit. 2005, 38, 755–765. [Google Scholar] [CrossRef]
  9. Pollefeys, M.; Koch, R.; Gool, L.V. Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters. In Proceedings of the 1998 Sixth IEEE International Conference on Computer Vision, Bombay, India, 7 January 1998; pp. 90–95. [Google Scholar]
  10. Tan, L.; Wang, Y.; Yu, H.S.; Zhu, J. Automatic camera calibration using active displays of a virtual pattern. Sensors 2017, 17, 685. [Google Scholar] [CrossRef] [PubMed]
  11. Liu, Z.; Wu, Q.; Chen, X.; Yin, Y. High-accuracy calibration of low-cost camera using image disturbance factor. Opt. Express 2016, 24, 24321–24336. [Google Scholar] [CrossRef] [PubMed]
  12. Heikkila, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077. [Google Scholar] [CrossRef]
  13. Tarel, P.J.; Gagalowicz, A. Calibration de caméra à base d’ellipses. Trait. Signal 1995, 12, 177–187. [Google Scholar]
  14. Li, D.; Tian, J. An accurate calibration method for a camera with telecentric lenses. Opt. Lasers Eng. 2013, 51, 538–541. [Google Scholar] [CrossRef]
  15. Kim, J.S.; Gurdjos, P.; Kweon, I.S. Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 637–642. [Google Scholar] [PubMed]
  16. Kim, J.S.; Kim, H.W.; Kweon, I.S. A camera calibration method using concentric circles for vision applications. In Proceedings of the Fifth Asian Conference on Computer Vision, Melbourne, Australia, 23–25 January 2002; pp. 23–25. [Google Scholar]
  17. Kim, J.S.; Kweon, I.S. A new camera calibration method for robotic applications. In Proceedings of the 2001 IEEE International Conference on Intelligent Robots and Systems, Maui, HI, USA, 29 October–3 November 2001; pp. 778–783. [Google Scholar]
  18. Zhang, B.W.; Li, Y.F.; Chen, S.Y. Concentric-circle-based camera calibration. IET Image Process. 2012, 6, 870–876. [Google Scholar] [CrossRef]
  19. Chen, X.Y.; Hu, Y.; Ma, Z.; Yu, S.; Chen, Y. The location and identification of concentric circles in automatic camera calibration. Opt. Laser Technol. 2013, 54, 185–190. [Google Scholar] [CrossRef]
  20. Yang, S.; Liu, M.; Yin, S.; Guo, Y.; Ren, Y.; Zhu, J. An improved method for location of concentric circles in vision measurement. Measurement 2017, 100, 243–251. [Google Scholar] [CrossRef]
  21. Bell, T.; Xu, J.; Zhang, S. Method for out-of-focus camera calibration. Appl. Opt. 2016, 55, 2346–2352. [Google Scholar] [CrossRef] [PubMed]
  22. An, Y.; Bell, T.; Li, B.; Xu, J.; Zhang, S. Method for large-range structured light system calibration. Appl. Opt. 2016, 55, 9563–9572. [Google Scholar] [CrossRef] [PubMed]
  23. Schmalz, A.C.; Forster, F.; Angelopoulou, E. Camera calibration: Active versus passive targets. Opt. Eng. 2011, 50, 828–832. [Google Scholar]
  24. Huang, L.; Zhang, Q.; Asundi, A. Camera calibration with active phase target: Improvement on feature detection and optimization. Opt. Lett. 2013, 38, 1446–1448. [Google Scholar] [CrossRef] [PubMed]
  25. Wang, Y.W.; Chen, X.C.; Tao, J.Y.; Wang, K.Y.; Ma, M.C. Accurate feature detection for out-of-focus camera calibration. Appl. Opt. 2016, 55, 7964–7971. [Google Scholar] [CrossRef] [PubMed]
  26. Wang, Y.W.; Cai, B.L.; Wang, K.Y.; Chen, X.C. Out-of-focus color camera calibration with one normal-sized color-coded pattern. Opt. Lasers Eng. 2017, 98, 17–22. [Google Scholar] [CrossRef]
  27. Huang, P.S.; Zhang, S. Fast three-step phase-shifting algorithm. Appl. Opt. 2005, 41, 4503–4509. [Google Scholar] [CrossRef]
  28. Conomis, C. Conics-based homography estimation from invariant points and pole-polar relationships. In Proceedings of the 2006 International Conference on 3D Data Processing, Visualization and Transmission, Chapel Hill, NC, USA, 14–16 June 2006; pp. 908–915. [Google Scholar]
  29. Huang, H.; Zhang, H.; Cheung, Y. The common self-polar triangle of concentric circles and its application to camera calibration. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4065–4072. [Google Scholar]
  30. Lu, L.; Xi, J.; Yu, Y. Shadow removal method for phase-shifting profilometry. Appl. Opt. 2015, 54, 6059–6064. [Google Scholar] [CrossRef] [PubMed]
  31. Xue, J.; Su, X.Y.; Xiang, L.; Chen, W. Using concentric circles and wedge grating for camera calibration. Appl. Opt. 2012, 51, 3811–3816. [Google Scholar] [CrossRef] [PubMed]
  32. Meng, X.; Hu, Z. A new easy camera calibration technique based on circular points. Pattern Recognit. 2002, 36, 1155–1164. [Google Scholar] [CrossRef]
  33. Fitzgibbon, A.; Pilu, M.; Fisher, R.B. Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 476–480. [Google Scholar] [CrossRef]
  34. Bouguet, J.-Y. Camera Calibration Toolbox for Matlab. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html (accessed on 20 July 2017).
Figure 1. The phase-shifting circular patterns. (a–c) Images of the 4 × 4 PCG array patterns; (d) Wrapped phase of a single PCG.
Figure 2. The map of the feature detection of the proposed method.
Figure 3. The schemes of the sorting algorithm. (a) Solving for B and D using the straight line l0 and for the final point C using the straight line l1; (b) Reordering the vertexes and labeling the feature points.
Figure 4. Errors regarding the number of PCGs. (a) Relative error of focal length; (b) Absolute error of principal point; (c) RMSEs with different numbers of PCGs.
Figure 5. Errors regarding the period of PCG. (a) Relative error of focal length; (b) Absolute error of principal point; (c) RMSEs with different periods of PCG.
Figure 6. Errors regarding the noise level of the patterns. (a) Relative error of focal length; (b) Absolute error of principal point; (c) RMSEs with different noise levels.
Figure 7. Setup of the real experiment.
Figure 8. Captured images at different rotation angles and enlarged regions of the PCG center. (a–c) The images at 15°, 30°, and 45°, respectively; (d–f) The enlarged PCG center regions at 15°, 30°, and 45°, respectively.
Figure 9. The captured images at different defocus degrees. (a–c) Images of the PCG arrays at different defocus degrees; (d–f) Images of the concentric circle array at different defocus degrees.
Figure 10. Wrapped phase with the calculated imaged centers at different defocus degrees. (a–c) The wrapped phase of Figure 9a–c, respectively.
Table 1. Calibration results for real images using the two patterns.

Trial    Pattern                   fu        fv        u0       v0       k1      k2     RMSE
Trial 1  PCG arrays                2748.012  2747.892  982.443  616.473  −0.010  0.099  0.045
         Concentric circle array   2745.681  2745.370  982.433  615.732  −0.012  0.103  0.054
Trial 2  PCG arrays                2732.675  2732.785  980.751  615.591  −0.012  0.105  0.048
         Concentric circle array   2721.353  2720.358  972.573  609.420  −0.041  0.182  0.136
Trial 3  PCG arrays                2674.015  2674.918  974.139  614.406  −0.047  0.174  0.057
         Concentric circle array   2680.222  2678.179  974.413  616.083  −0.060  0.243  0.179
