Article

A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens

Institute of Robotics and Information and Communication Technologies (IRTIC), Universitat de València, 46980 València, Spain
* Author to whom correspondence should be addressed.
J. Imaging 2017, 3(2), 19; https://doi.org/10.3390/jimaging3020019
Submission received: 27 February 2017 / Revised: 6 April 2017 / Accepted: 31 May 2017 / Published: 3 June 2017
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

Abstract

The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method for multi-projector calibration on analytically defined screens; as a sample, the method is applied to a virtual reality Formula 1 simulator with a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system—which is currently installed in a shopping mall in Spain—has been tested by different users.

Graphical Abstract

1. Introduction

Projectors are used in scientific visualizations, virtual and augmented reality systems, structured light techniques and other visually intensive applications. In the last few years, a number of approaches have been proposed to calibrate projectors. As pointed out in Brown et al. [1], calibration of a projector can be achieved by two different strategies: (1) through mechanical and electronic alignment; or (2) by using one or more cameras that observe one or a set of projected images. While the first strategy may be more accurate and easier for the non-expert user, it brings several drawbacks, e.g., the requirement of special infrastructure and/or resources that can considerably increase the system’s costs; moreover, once the system is calibrated, the calibration hardware is no longer needed and can become a hindrance. In contrast, the second approach is less conspicuous and less time-consuming, although it requires a solid software background. For this reason, several vision-based techniques have recently been proposed to calibrate projectors, some of which are introduced in the following lines.
Many works on projector or multi-projector displays are based on planar surfaces [2,3,4,5,6,7,8], for which automated geometric correction and alignment are simplified through the use of planar homographies between the planar screen, the projector frame buffers, and the images of one or more cameras observing the screen. Many of these developments use chessboard-based planar references, alone or in combination with other planar surfaces, to automatically measure 2D points (i.e., image points) and establish point correspondences [8,9,10,11,12,13,14]. Other authors use augmented reality markers or other kinds of self-designed planar markers to establish such correspondences [15,16,17,18]. In these developments, a physical pattern usually exists onto which another pattern is projected; the physical pattern is used to compute the camera calibration, while the projected pattern is used to compute the projector calibration. Some authors use a more complex mathematical background, such as Knyaz [18], who uses bundle adjustment to derive all unknowns in a single step. As these implementations are based on planar homographies, several views of the reference pattern are needed.
On the other hand, fewer works can be found where non-planar surfaces are used to achieve calibration. In Raskar et al. [19], projective geometry is used to fully calibrate a camera pair using a 3D calibration pattern with spatially known control points (CPs). These points are used to establish 2D-3D correspondences with the related camera image points, and thus compute the projection matrix for each camera. Once both cameras are calibrated, the projector is used to project a grid, on a point-by-point basis. Applying stereo-pair triangulation to the images captured by the two-camera stereoscopic system, the spatial coordinates of those points are computed, and therefore the projection screen is geometrically defined. Finally, 2D-3D correspondences are established to fully calibrate the projector in the same way as the cameras.
Other authors make use of structured light techniques. For instance, in Tardif et al. [20] an approach is presented that allows one or more projectors to display an undistorted image on a surface of unknown geometry. Structured light patterns are used to compute the relative geometries between camera and projector, and thus no explicit calibration is needed. In Harville et al. [21], a method is proposed to project imagery without distortion onto a developable surface—e.g., flat walls, piecewise-planar shapes, cylindrical and conical sections—in such a way that the displayed images appear like wallpaper on the display surface. Camera-projector correspondences are obtained with a structured light approach based on projecting a sequence of bar images (8 to 12 images) of increasingly fine spatial frequency to temporally encode the projector coordinates corresponding to the various camera pixels. Since the method does not require the 3D shape of the projection screen, it is only applied to reproduce a wallpaper effect. An improved approach is presented in Sun et al. [22], combining the advantages of global surface fitting and homographies to generate high-accuracy geometric corrections that are independent of the calibration camera’s location and viewing angle.
A different approach is introduced in Sajadi and Majumder [23], where spatial geometric relationships are established to derive the exterior orientation of a camera and the interior and exterior orientation of a projector. They rely on some assumptions, such as known interior camera orientation parameters, a known shape of the projection screen—which is a vertically extruded cylindrical surface—and a known aspect ratio of the rectangle formed by the four corners of the screen. A more generic approach for any kind of extruded surface is presented in Sajadi and Majumder [24].
More recently, Zhao et al. [25] introduced a two-step approach based on Bézier patches to calibrate projectors on cylindrical surfaces. In the first step, a rough calibration is performed by projecting a total of eight encoded images per projector onto the surface. In a second step, an accurate calibration is performed to correct the errors in the overlap region of adjacent projectors by using the Bézier surface to slightly distort the projected images. In Chen et al. [26], a method to calibrate a multi-projector light field is presented that consists of transferring the calibration of a 3D scene into the calibration of a 2D image on a diffuser interface, a curved screen. Their setup includes a set of printed and projected points and a precise rotary table where a CCD camera is fixed.
The aim of our research work is to design and deploy a multi-projector calibration method that is fast and easy to use, while relying on any kind of analytically defined surface that can be found in virtual reality simulators. In the scope of this paper, the implemented method is tested and validated with a Formula 1 (F1) virtual reality simulator that has a cylindrical screen and where the image is formed by the conjunction of three projectors. The mathematical background of our approach is based on a combination of surveying, photogrammetry and image processing techniques, and has been designed by considering the geometric restrictions of virtual reality simulators, where the available space is limited and no special infrastructure can be added to calibrate the projectors. Our method combines the simplicity of acquiring in-situ data with inexpensive devices and the simplicity of its mathematical formulation, so the procedure can be carried out by non-expert users.

2. Materials and Methods

2.1. Hardware Components

As introduced above, the designed method has been tested and validated with an F1 virtual reality simulator, whose construction details and characteristics are explained here.
The simulator has been designed and constructed at the Institute of Robotics and Information and Communication Technologies (IRTIC) of the Universitat de València with off-the-shelf components (Figure 1, left). It consists of a motion platform with 6 DoF (Degrees of Freedom), a replica of an F1 pilot seat and a cylindrical surface where the virtual contents are projected. The motion platform is able to reproduce accelerations up to 0.8 G, with rotational limits of 35° and longitudinal displacements of ±150 mm. It uses the classical washout algorithm [27], the parameters of which are tuned by a genetic algorithm [28] to provide a standard setup, valid for most users. The system provides 330° of horizontal FoV (Field of View), although the current configuration is set up for 280°, and 55° of vertical FoV; the virtual contents are displayed by the projection of three projectors (Figure 1, right). The projectors are mounted upside-down and each has Full HD resolution, so the total pixel resolution of the system is 5760 × 1080 pixels.

2.2. Multi-Projector Calibration Method with Analytically Defined Screens

The multi-projector calibration method proposed here involves approaches from surveying, photogrammetry and image processing. A flowchart of the complete methodology is depicted in Figure 2, where the main procedures explained in the following sub-sections are highlighted. Overall, a single camera, placed at three different positions (Cam1, Cam2 and Cam3 in Figure 3), is used to calibrate the three projectors (Proj1, Proj2 and Proj3 in Figure 3). In a first step, the camera interior and exterior orientation parameters have to be calculated, as they are needed to calibrate the projectors. For the exterior camera calibration, 2D/3D correspondences of a set of Control Points (CPs) are needed, whose object coordinates can be computed from distances measured in the object space. To calibrate the projectors, the Direct Linear Transformation (DLT) equations are used, from which the interior and exterior orientation parameters can be derived.

2.2.1. Compensated 3D Coordinates of CPs

Each CP consists of a physical point on the screen. To simplify the measuring procedure, the CPs are placed in two rows of vertically aligned pairs, as depicted in Figure 3 (right) for the case of a cylindrical screen, where CPs 1 to 8 are the minimum number of required CPs (four per camera position), while CPs 9 to 14 are optional but recommended in order to have redundancy. To calculate the 3D coordinates of the CPs, a set of distances is measured with a measuring tape, from each CP to the rest of the CPs in the same row; the distances of the second row are considered equal, as the CPs are vertically aligned. These distances are depicted in Figure 3, where continuous lines represent distances between required CPs and dotted lines represent distances involving the additional CPs. Additionally, the height between the two rows of CPs, which is constant, is required. This procedure can be done at the laboratory, as the cylindrical screen keeps a constant shape; in that case, the calibration procedure in situ is considerably faster.
Once the distances have been measured, the 3D coordinates of the CPs can be calculated; these coordinates are only approximate, as the measured distances have not yet been compensated. In order to derive the 3D coordinates, we establish a coordinate system whose origin is located at the intersection of the horizontal plane defined by the bottom CPs (CP5, CP12, etc.) and the central axis of the cylinder. The Y-axis is vertical, the X-axis is parallel to the direction CP6-CP7, the Z-axis is perpendicular to the plane defined by the X-axis and the Y-axis, and the three axes define a right-handed coordinate system, as depicted in Figure 4. In such a coordinate system, the approximate 3D coordinates of the control points can be derived by applying simple geometrical rules, as indicated in Table 1, where $d = \sqrt{r^2 - (a/2)^2}$, $m = \frac{a^2 + b^2 - c^2}{2a}$ and $n = \sqrt{b^2 - m^2}$. The radius of the cylinder is also calculated by applying simple geometrical rules to the triangle defined by the distances a, b and c.
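To make the geometrical rules of Table 1 concrete, the following Python sketch computes the radius and the auxiliary quantities d, m and n from the three measured chords. It assumes that a is the chord between the two outer CPs of a row and that b and c are the chords from the third CP to the CPs at +a/2 and −a/2, respectively (our reading of Figure 3); the function and variable names are our own, not part of the original implementation.

```python
import math

def cp_geometry(a, b, c, h):
    """Approximate CP coordinates from three measured chords of the circular
    cross-section (a: chord between the two outer CPs of a row; b, c: chords
    from the third CP to the CPs at +a/2 and -a/2) and the row height h.
    Names are illustrative, not taken from the paper."""
    # The three CPs lie on the circle, so its radius is the circumradius
    # of the inscribed triangle: r = abc / (4 * area), area via Heron's formula.
    s = 0.5 * (a + b + c)
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    r = a * b * c / (4.0 * area)
    d = math.sqrt(r ** 2 - (a / 2.0) ** 2)        # axis-to-chord distance
    m = (a ** 2 + b ** 2 - c ** 2) / (2.0 * a)    # foot of the altitude along a
    n = math.sqrt(b ** 2 - m ** 2)                # altitude of the triangle over a
    # Approximate coordinates of the central CPs (Table 1); the Y-axis is vertical.
    cps = {
        "CP6": (-a / 2.0, 0.0, -d),  "CP7": (a / 2.0, 0.0, -d),
        "CP13": (a / 2.0 - m, 0.0, -d - n),
        "CP2": (-a / 2.0, h, -d),    "CP3": (a / 2.0, h, -d),
        "CP10": (a / 2.0 - m, h, -d - n),
    }
    return r, cps
```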
Redundant measurements are considered with the purpose of reducing errors in the calculation of the CPs after applying a compensation with a Least Squares Fitting (LSF). In this way, errors due both to the limited accuracy of the measuring device (measuring tape) and to the local mechanical distortions introduced during the construction of the cylindrical screen can be reduced. The mathematical model of the LSF is given in Equation (1) in the form of indirect observations [29]. In this equation, A is the design matrix, X is the vector of the unknowns, R is the vector of the residuals and K is the vector holding the independent terms. The design matrix A is constructed from Equation (2), which represents the mathematical form of an observed distance. In this equation, $l_{ij}$ is the approximate calculated value of the distance between i and j, $dl_{ij}$ is the differential of $l_{ij}$, $\theta_{ij}^{ca}$ is the approximate calculated azimuth of the distance $l_{ij}$, and $dz_j$, $dz_i$, $dx_j$ and $dx_i$ are the unknowns, i.e., the corrections to the X and Z coordinates of points j and i (any pair of CPs). The equation of the azimuth is given in (3). The solution for the unknowns ($dz_j$, $dz_i$, $dx_j$ and $dx_i$) is given in Equation (4), known as the normal equations. Finally, the compensated X and Z coordinates of the CPs are calculated as shown in (5) and (6). Note that the Y coordinates are not compensated in this procedure, as the height h is directly measured on the screen surface and is thus acquired with higher accuracy.
$A_{(\mathrm{distances},\,\mathrm{CPs}\cdot 2)}\; X_{(\mathrm{CPs}\cdot 2,\,1)} - R_{(\mathrm{distances},\,1)} = K_{(\mathrm{distances},\,1)}$  (1)
$dl_{ij} = dz_j \cos\theta_{ij}^{ca} - dz_i \cos\theta_{ij}^{ca} + dx_j \sin\theta_{ij}^{ca} - dx_i \sin\theta_{ij}^{ca}$  (2)
$\theta_{ij}^{ca} = \arctan\dfrac{x_j - x_i}{z_j - z_i}$  (3)
$X = (A^{T} A)^{-1} A^{T} K$  (4)
$CP_i.X_{\mathrm{compensated}} = CP_i.X_{\mathrm{approximate}} + dx_i$  (5)
$CP_i.Z_{\mathrm{compensated}} = CP_i.Z_{\mathrm{approximate}} + dz_i$  (6)
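As an illustration of Equations (1)–(6), the snippet below assembles the design matrix and solves the adjustment with NumPy. The data structures (a dictionary of approximate coordinates and a list of distance observations) and the function name compensate_cps are our own choices, and np.linalg.lstsq is used so that a possible datum (rank) defect of a distance-only network does not break the solution.

```python
import numpy as np

def compensate_cps(approx_xz, distances):
    """Least Squares Fitting of the horizontal CP coordinates, Eqs. (1)-(6).
    approx_xz : {cp_id: (x, z)} approximate coordinates (Table 1)
    distances : list of (cp_i, cp_j, measured_length) observations
    Returns {cp_id: (x, z)} with compensated coordinates (illustrative API)."""
    ids = sorted(approx_xz)
    col = {cp: 2 * k for k, cp in enumerate(ids)}    # unknown order: dz, dx per CP
    A = np.zeros((len(distances), 2 * len(ids)))
    K = np.zeros(len(distances))
    for row, (i, j, l_obs) in enumerate(distances):
        xi, zi = approx_xz[i]
        xj, zj = approx_xz[j]
        theta = np.arctan2(xj - xi, zj - zi)         # azimuth, Eq. (3)
        # Eq. (2): dl_ij = dz_j cosT - dz_i cosT + dx_j sinT - dx_i sinT
        A[row, col[j]] = np.cos(theta)
        A[row, col[i]] = -np.cos(theta)
        A[row, col[j] + 1] = np.sin(theta)
        A[row, col[i] + 1] = -np.sin(theta)
        K[row] = l_obs - np.hypot(xj - xi, zj - zi)  # observed minus calculated
    # Eq. (4); lstsq yields the minimum-norm solution, which also tolerates a
    # possible datum defect of the distance-only network.
    X, *_ = np.linalg.lstsq(A, K, rcond=None)
    return {cp: (approx_xz[cp][0] + X[col[cp] + 1],  # Eq. (5)
                 approx_xz[cp][1] + X[col[cp]])      # Eq. (6)
            for cp in ids}
```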
As an alternative to the measuring process explained here, the CPs could be derived directly with other surveying methods. For instance, a total station could be used instead, which directly provides 3D object coordinates. However, this would require an expensive device and an experienced user to properly collect the data, which is precisely one of the issues that we would like to avoid and that motivates the development of our method. Additionally, due to the spatial limitations of most virtual reality simulators, measurements with such devices may not be feasible.

2.2.2. 2D/3D Correspondences of CPs

The image (2D) coordinates of the CPs and the assignment of correspondences are obtained in a semi-automated manner. First, the CPs are automatically extracted from the images acquired by the camera by detecting black points on the white surface; to that end, the OpenCV library [30] has been used. The computation of the 2D coordinates with sub-pixel resolution is straightforward with the pattern-based image processing techniques available in OpenCV. In order to assign the 2D/3D correspondences, the user interactively selects the corresponding CP identifier in each image.
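As an illustration of this step, dark circular CPs on the white screen can be located with OpenCV's blob detector. The snippet below is a minimal sketch; the area and circularity parameters are placeholders that would need tuning for the actual images.

```python
import cv2
import numpy as np

def detect_cp_centers(image_path):
    """Detect dark, roughly circular CP marks on the bright screen and
    return their image centers (illustrative parameters)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 0              # dark blobs on a bright background
    params.filterByArea = True
    params.minArea = 30               # placeholder: depends on CP size and distance
    params.filterByCircularity = True
    params.minCircularity = 0.6
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray)
    return np.array([kp.pt for kp in keypoints], dtype=np.float64)
```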

2.2.3. Camera Interior Orientation

To determine the camera interior orientation parameters, an algorithm has been implemented that makes use of the OpenCV library. In this approach, the interior orientation procedure is based on the Zhang method [31] to solve for the focal length and the principal point offsets (c, x0 and y0), while the tangential and radial distortion coefficients are computed following the method described in [32]. The equations used in OpenCV for correcting radial distortion are shown in Equations (7) and (8), where k1, k2 and k3 are the computed radial distortion coefficients. The tangential distortion is corrected via Equations (9) and (10), where p1 and p2 are the computed tangential distortion coefficients. The radius r is the distance from the distorted image point under consideration to the distortion center.
$x_{\mathrm{corrected}} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$  (7)
$y_{\mathrm{corrected}} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$  (8)
$x_{\mathrm{corrected}} = x + [\,2 p_1 x y + p_2 (r^2 + 2 x^2)\,]$  (9)
$y_{\mathrm{corrected}} = y + [\,p_1 (r^2 + 2 y^2) + 2 p_2 x y\,]$  (10)
To obtain the intrinsic (interior orientation) parameters, multiple 2D-to-2D correspondences of a planar chessboard object viewed from different angles are needed, where a minimum of 10 views is recommended. This procedure needs to be done only once and can be performed offline at the laboratory beforehand, as the interior orientation of the camera remains constant.
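This step maps directly onto OpenCV's standard chessboard calibration pipeline. The sketch below assumes a folder of chessboard photographs; the pattern size and square size are illustrative placeholders.

```python
import cv2
import numpy as np
import glob

def calibrate_camera(image_glob, pattern_size=(9, 6), square_size=0.025):
    """Interior orientation (c, x0, y0) and distortion (k1, k2, p1, p2, k3)
    from >= 10 chessboard views, following Zhang's method as wrapped by OpenCV.
    pattern_size and square_size are illustrative values."""
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size
    obj_points, img_points, img_size = [], [], None
    for fname in glob.glob(image_glob):
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        img_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                             img_size, None, None)
    return K, dist, rms    # K holds c and (x0, y0); dist = [k1 k2 p1 p2 k3]
```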

2.2.4. Camera Exterior Orientation

The camera exterior orientation parameters are computed with the PnP (Perspective-n-Point) algorithm, which is implemented in the OpenCV library and makes use of RANSAC (RANdom SAmple Consensus). The method needs as input a set of CPs whose image (2D) and object (3D) coordinates are known, together with the camera interior orientation calculated in the previous step.
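A minimal sketch of this step, assuming the 2D/3D CP correspondences and the previously computed intrinsics are available as NumPy arrays:

```python
import cv2
import numpy as np

def camera_exterior_orientation(cp_3d, cp_2d, K, dist):
    """Exterior orientation of one camera position from CP correspondences
    using OpenCV's RANSAC-based PnP (minimal sketch)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(cp_3d, dtype=np.float64),   # Nx3 object coordinates
        np.asarray(cp_2d, dtype=np.float64),   # Nx2 image coordinates
        K, dist)
    if not ok:
        raise RuntimeError("PnP failed: check the CP correspondences")
    R, _ = cv2.Rodrigues(rvec)                 # rotation matrix, world -> camera
    camera_center = (-R.T @ tvec).ravel()      # projection center in the world frame
    return R, tvec, camera_center
```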
When positioning the camera in the scenario to calibrate the three projectors, it has to be taken into account that the FoV of the camera at each position has to include the FoV of the corresponding projector, as this will be required when calibrating the projector. As there is overlap between camera positions 1 and 2, and between camera positions 2 and 3, a minimum of 8 CPs is required to calibrate the exterior orientation of the camera at the three positions (CPs 1 to 8 in Figure 3).

2.2.5. 2D/3D Correspondences of Chessboard

The 2D-3D correspondences required to calibrate the projectors are calculated from the set of points derived from a projected chessboard. Therefore, for the purpose of calibrating the projectors, an image of the chessboard projected on the cylinder is needed for each projector, meaning a total of three images, from which 3D coordinates are computed by applying ray tracing: starting from the known interior and exterior camera calibration, rays are traced from the camera optical center (black dot in Figure 5) through each of the chessboard points on the camera image plane (green dots in Figure 5). The intersection of each spatial ray with the vertical cylinder, which is analytically defined, yields the 3D coordinates of that point (blue dots in Figure 5).
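For a vertical cylinder centred on the Y-axis, the ray/screen intersection reduces to a quadratic in the ray parameter. The following sketch is our own formulation, not the authors' code; the camera centre and the ray direction are assumed to be expressed in the object coordinate system of Figure 4, the direction being obtained for each image point by back-projecting it through the calibrated camera.

```python
import numpy as np

def ray_vertical_cylinder(origin, direction, radius):
    """Intersect the ray origin + t*direction (t > 0) with the vertical cylinder
    x^2 + z^2 = radius^2 centred on the Y-axis; returns the nearest hit point."""
    origin = np.asarray(origin, dtype=np.float64)
    direction = np.asarray(direction, dtype=np.float64)
    ox, _, oz = origin
    dx, _, dz = direction
    a = dx * dx + dz * dz
    b = 2.0 * (ox * dx + oz * dz)
    c = ox * ox + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                              # ray parallel to the axis or missing
    roots = ((-b - np.sqrt(disc)) / (2.0 * a),
             (-b + np.sqrt(disc)) / (2.0 * a))
    t_candidates = [t for t in roots if t > 1e-9]
    if not t_candidates:
        return None
    return origin + min(t_candidates) * direction  # nearest intersection in front
```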
In addition, the image fed to each projector is needed to compute the 2D image coordinates of each chessboard point. The computation of the 2D coordinates with sub-pixel resolution is straightforward with the pattern-based image processing techniques available in OpenCV.

2.2.6. Projector Calibration

The calibration of the projectors relies on the Direct Linear Transformation (DLT) equations. The basic model equations of the DLT are given in Equations (11) and (12), where x, y are the observable image coordinates of a point, X, Y, Z are the spatial coordinates of that object point, and $a_i$, $b_i$, $c_i$ are the 11 DLT parameters of a particular image. As one observed point provides 2 equations, a minimum of 6 points is needed to solve for the 11 unknowns. It is known that the DLT parameters can be directly related to the six elements of the exterior orientation of an image (X0, Y0, Z0, and the orientation angles: camera direction α, nadir distance ν and swing κ) and to five elements of the interior orientation (principal point coordinates x0, y0, focal length c, relative y-scale λ and shear d) [33,34]. Therefore, solving these equations for a minimum of 6 observed points (points with 2D-3D correspondences) leads to the computation of the interior and exterior sensor orientation. If more points are available, the system can be solved with an LSF. Finally, it is worth mentioning that the DLT fails if all CPs lie in one plane; this situation cannot occur here, as the points lie on a curved surface, the vertical cylinder.
$x = \dfrac{a_1 X + a_2 Y + a_3 Z + a_4}{c_1 X + c_2 Y + c_3 Z + 1}$  (11)
$y = \dfrac{b_1 X + b_2 Y + b_3 Z + b_4}{c_1 X + c_2 Y + c_3 Z + 1}$  (12)
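For reference, a minimal least-squares DLT solver following Equations (11) and (12) is sketched below; the subsequent decomposition of the 11 parameters into interior and exterior orientation follows [33,34] and is not shown. Function and variable names are ours.

```python
import numpy as np

def solve_dlt(points_3d, points_2d):
    """Solve the 11 DLT parameters from >= 6 2D-3D correspondences, Eqs. (11)-(12).
    Returns the parameter vector [a1 a2 a3 a4 b1 b2 b3 b4 c1 c2 c3]."""
    pts3 = np.asarray(points_3d, dtype=np.float64)
    pts2 = np.asarray(points_2d, dtype=np.float64)
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(pts3, pts2):
        # Linearised form: a1 X + a2 Y + a3 Z + a4 - x(c1 X + c2 Y + c3 Z) = x
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        rhs.append(y)
    A = np.array(rows)
    k = np.array(rhs)
    params, *_ = np.linalg.lstsq(A, k, rcond=None)   # LSF when more than 6 points
    return params
```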

3. Results

3.1. Validation

The system was installed in one of the main shopping malls in València and calibrated in-situ by non-experts. The camera used in the procedure was a Canon G12 acquiring images at a resolution of 2816 × 1880 pixels, which had previously been calibrated at the laboratory. A total of 14 CPs were placed beforehand on the cylindrical surface, as indicated in Figure 3. The measured distances are depicted in Table 2, where the condition that all points lie on a circle has been enforced by introducing so-called fictitious observations, which in this case are distances from the center of the circle O to all CPs with the known value of the radius (as given by the manufacturer). After an LSF procedure, the compensated distances were derived (Table 3), from which the computation of the compensated 3D coordinates of the CPs is straightforward. The computed mean value of the compensated radius was 1.569 m, which is the value used in the computation of the projector calibration.
The computed interior orientation parameters of the camera and the distortion coefficients are depicted in Table 4, which have been derived as explained in Section 2.2.3. The exterior orientation parameters of the camera at the three positions are depicted in Table 5, which have been derived as explained in Section 2.2.4.
The computed interior orientation parameters of the three projectors are depicted in Table 6, whereas the exterior orientation parameters are depicted in Table 7. These values have been derived from the DLT parameters, as explained in Section 2.2.6.
The computed average image discrepancies for the three projectors were 0.533 pixels, 1.056 pixels and 0.896 pixels, respectively. Individual discrepancies are depicted in Figure 6, where a scale factor of ×50 has been applied in order to visualize the direction of the errors. As can be noticed, there is no predominant direction, meaning that systematic errors are not present. It can also be observed that errors at the image borders are greater than at the center of the images, but they still represent low values that can be neglected.
Once the projector orientation parameters have been obtained, it is possible to project onto the cylindrical surface (whose geometry is known) any point in a known 3D position. In order to further check the accuracies in object space, the computed CPs were projected on top of the physical CPs. Small discrepancies (less than 0.5 mm) were observed in object space for all cases. In Figure 7, an image with different CPs and another detail image of CP2 are depicted, where “+” was used for Proj1 and Proj3 and “×” was used for Proj2.

3.2. Image Warping

Some images were warped onto the cylindrical surface with a wallpaper-like mapping, as depicted in Figure 8. This mapping is so called because it produces the same effect as if the image were printed on paper and attached to the cylindrical surface following its curvature. In our implementation, the user can choose the height of the projection on the cylindrical surface and the horizontal FoV. In Figure 8 (top) a grid is shown, where the image of the central projector is in cyan and those of the lateral projectors are in yellow. In Figure 8 (bottom), an urban space is shown, where the overlap between each pair of projectors is left as it is, without applying any kind of blending, in order to depict the common areas. A sketch of how such a warping mesh can be built is given below.
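To make the wallpaper mapping concrete, a warping mesh can be built by sampling the visible arc of the cylinder in angle and height, projecting each sample into the projector image with the DLT parameters, and pairing it with the corresponding source-image coordinate. The sketch below is our own illustration; the sampling density and the FoV and height limits are placeholders.

```python
import numpy as np

def wallpaper_warp_mesh(dlt, radius, fov_deg, y_min, y_max, nu=40, nv=20):
    """Build (projector_xy, source_uv) pairs for a wallpaper-like mapping.
    dlt : 11 DLT parameters [a1..a4, b1..b4, c1..c3] of one projector.
    fov_deg, y_min, y_max : horizontal FoV and height range chosen by the user.
    Names and sampling density are illustrative."""
    dlt = np.asarray(dlt, dtype=np.float64)
    a, b = dlt[0:4], dlt[4:8]
    c = np.append(dlt[8:11], 1.0)
    mesh = []
    for iv in range(nv + 1):
        for iu in range(nu + 1):
            u, v = iu / nu, iv / nv
            theta = np.radians((u - 0.5) * fov_deg)   # angle along the visible arc
            # 3D point on the cylindrical screen (coordinate system of Figure 4)
            P = np.array([radius * np.sin(theta),
                          y_min + v * (y_max - y_min),
                          -radius * np.cos(theta), 1.0])
            w = c @ P                                 # DLT denominator, Eqs. (11)-(12)
            mesh.append(((a @ P / w, b @ P / w),      # projector pixel
                         (u, v)))                     # normalised source-image coords
    return mesh
```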
Although it is not the aim of this paper, it has to be mentioned that the overlapping areas of the projected images have to be blended. Due to the nature of most virtual reality simulators, the optic flow generated by the images on the screen is expected to be highly variable (from very fast to very slow). This means that the blending zone needs to be very accurate so that the mid-peripheral vision looks right to the user.
The blending process implies both a refined geometrical matching (in the overlapping areas) and a smooth photometric blending. The matching of the warping meshes obtained from the calibration is quite accurate in the overlapping areas, yet there could still be room for ultra-fine tuning. Regarding the photometric blending, a shader-based application was used to perform a linear luminance interpolation and to map the corrected wallpaper from the driving simulation output. This software reads the data obtained from the calibration process and also allows small geometrical corrections to be applied to the warping mesh.
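As a simplified CPU illustration of the linear luminance interpolation (the actual system performs it in a shader), per-column blending weights can be computed as follows; the overlap widths in pixels are placeholders.

```python
import numpy as np

def linear_blend_mask(width, overlap_left, overlap_right):
    """Per-column luminance weights for one projector image: ramp up over the
    left overlap, flat in the middle, ramp down over the right overlap
    (simplified version of the shader-based linear interpolation)."""
    w = np.ones(width)
    if overlap_left > 0:
        w[:overlap_left] = np.linspace(0.0, 1.0, overlap_left)
    if overlap_right > 0:
        w[width - overlap_right:] = np.linspace(1.0, 0.0, overlap_right)
    return w   # multiply each image column by its weight before projection
```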

3.3. Testing the Simulator with Users

The overall system was installed in one of the main shopping malls in València and tested by twenty local non-professional drivers. Each driver was prompted to use the simulator for 5 min, to avoid long runs that sometimes lead to simulator sickness. For the virtual content, we used both rFactor 2 and F1 2012, with a simulated Formula 1 car and the classical Reid-Nahon washout algorithm for the motion cueing generation.
The projection system and the screen are welded structures. The projectors and the motion platform are tightly attached to the projection structure by a series of large bolts and nuts, so the motion of the projectors with respect to the screen and the motion base is negligible. In the performed tests, none of the drivers complained about the visual perception or the matching of the images displayed by the projectors. Minor complaints were reported by some users about the lack of brightness of the image and about the vibrations of the motion cueing generation; neither of these issues is related to the calibration process. Figure 9 shows the final aspect of the system.

4. Discussion

The proposed method is a waterfall algorithm, meaning that errors made in one step are not corrected in later steps. For this reason, it is important to achieve accurate values in all the intermediate steps. For instance, if the 3D coordinates of the control points are not accurately determined, the accuracy of the result might drastically worsen. If the 3D coordinates of the control points can be obtained with a total station, the errors introduced in this step can be neglected. However, if the 3D coordinates are geometrically computed after measuring distances with a measuring tape, as in the sample shown here, we consider it mandatory to observe redundant distances in order to perform the LSF. In order to show the discrepancies in the computed projector orientation when the LSF is not applied, we have simulated our method using the approximate 3D coordinates of the CPs instead of the compensated ones. The obtained discrepancies are depicted in Table 8 and Table 9. As can be seen, the results differ significantly. For instance, for Proj1 and Proj2, there is a discrepancy of 15 cm in the coordinate X0, while differences in the angular values exceed 10 degrees in some cases.
It is also relevant to mention that the reason we used three camera positions is the spatial restriction imposed by the F1 simulator. However, this method could be used to calibrate multiple projectors with a single camera position, provided that the FoV of the camera captures the whole screen at once. In such a case, only one input image and a minimum of 4 CPs would be needed to calibrate the camera. Once the camera is calibrated, the rest of the method can be applied as explained here.
Finally, we would also like to mention that, although the method presented here works for any kind of analytically defined screen, it could be easily adapted to other kinds of screens, even irregular surfaces. In this case, instead of defining the surface mathematically, the surface can be given as a point cloud. Thus, in order to obtain the 3D object coordinates of the chessboard points, ray tracing can be applied as proposed here, but intersecting the point cloud instead; this can be approximated by simply finding the cloud point closest to the 3D ray. We propose this as future work.
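A minimal NumPy sketch of this idea (our suggestion, not an implemented component of the system) approximates the intersection by the cloud point with the smallest perpendicular distance to the ray:

```python
import numpy as np

def closest_cloud_point_to_ray(cloud, origin, direction):
    """Approximate the ray/surface intersection by the cloud point with the
    smallest perpendicular distance to the ray (cloud: (N, 3) array)."""
    cloud = np.asarray(cloud, dtype=np.float64)
    origin = np.asarray(origin, dtype=np.float64)
    d = np.asarray(direction, dtype=np.float64)
    d = d / np.linalg.norm(d)
    rel = cloud - origin                 # vectors from the ray origin to the points
    t = rel @ d                          # signed projection onto the ray
    front = t > 0                        # keep only points in front of the camera
    perp = rel[front] - np.outer(t[front], d)
    idx = np.argmin(np.linalg.norm(perp, axis=1))
    return cloud[front][idx]
```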

5. Conclusions

Multi-projector setups are used in many different applications, such as virtual reality visualizations, projected augmented reality or 3D reconstruction, among others. This paper presents a fast and easy-to-use method to calibrate a multi-projector system for analytically defined screens.
We have developed and presented the equations for a cylindrical screen, and tested the process in a real case where multi-projector calibration is needed: a real-time Formula 1 simulator.
The contribution of our work lies in the simplicity of the process from the point of view of the person in charge of performing the calibration process. We also perform the calibration on the actual screen being calibrated, without the need to move it or use auxiliary surfaces. The measuring process is easy and requires neither dedicated infrastructures nor complex tools. In contrast to other calibration methods, our calibration process can be performed by any person capable of using a measuring tape. The most complicated step may be the camera calibration, although this is done only once and can be performed offline at the laboratory.
Moreover, the time needed to obtain the calibration parameters is also kept small, although electromechanical calibration methods are usually faster (but much more expensive). Both the measuring and the computation process require little time to complete. In fact, the mathematical methods needed to complete the process are computable in a few seconds by a modern computer, so the most time-consuming task of our calibration method is the manual measuring of the distances between the CPs.
While being simple, the method is accurate enough for most of the applications where calibration is needed. Image discrepancies are around or below one pixel, and systematic errors are not present. Our aim is entertainment and virtual reality, areas where the accuracy we obtained is sufficient. Other scientific areas, such as metrology, may require higher accuracy; however, accuracy comes at a cost.
In addition, although we presented here the evaluation for a cylindrical screen, the method could be applied to other analytical shapes, such as spheres or conical surfaces. Our method cannot be applied to planar screens (at least in its current form), as the DLT equations fail on co-planar points. However, we are not interested in this type of screen, because the majority of screens used in immersive virtual reality applications are either cylindrical or spherical. This is even truer in real-time simulators, like the one we used for our tests. The method can also be applied to non-analytical surfaces by making use of point cloud ray tracing, but these kinds of surfaces are unusual in virtual reality or entertainment applications.

Author Contributions

All authors contributed equally to the design of the calibration method. Cristina Portalés and Sergio Casas dealt with the implementation and tests. Marcos Fernández and Inmaculada Coma aided in the processing of data. All authors have contributed to the paper writing and reviews.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brown, M.; Majumder, A.; Yang, R. Camera-based calibration techniques for seamless multiprojector displays. IEEE Trans. Vis. Comput. Gr. 2005, 11, 193–206. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, H.; Sukthankar, R.; Wallace, G.; Li, K. Scalable alignment of large-format multi-projector displays using camera homography trees. In Proceedings of the IEEE Visualization, Boston, MA, USA, 27 October–1 November 2002; pp. 339–346. [Google Scholar]
  3. Raij, A.; Pollefeys, M. Auto-calibration of multi-projector display walls. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 14–17. [Google Scholar]
  4. Okatani, T.; Deguchi, K. Autocalibration of an ad hoc construction of multi-projector displays. In Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), New York, NY, USA, 17–22 June 2006; p. 8. [Google Scholar]
  5. Zhang, B.; Li, Y.F.; Wu, Y.H. Self-recalibration of a structured light system via plane-based homography. Pattern Recognit. 2007, 40, 1368–1377. [Google Scholar] [CrossRef]
  6. Orghidan, R.; Salvi, J.; Gordan, M.; Florea, C.; Batlle, J. Structured light self-calibration with vanishing points. Mach. Vis. Appl. 2014, 25, 489–500. [Google Scholar] [CrossRef]
  7. Huang, Z.; Xi, J.; Yu, Y.; Guo, Q. Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images. Appl. Opt. 2015, 54, 347–356. [Google Scholar] [CrossRef]
  8. Portalés, C.; Ribes-Gómez, E.; Pastor, B.; Gutiérrez, A. Calibration of a camera–projector monochromatic system. Photogramm. Rec. 2015, 30, 82–99. [Google Scholar] [CrossRef]
  9. Ashdown, M.; Sato, Y. Steerable projector calibration. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 21–23 September 2005; p. 8. [Google Scholar]
  10. Kimura, M.; Mochimaru, M.; Kanade, T. Projector calibration using arbitrary planes and calibrated camera. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–2. [Google Scholar]
  11. Chien, H.J.; Chen, C.Y.; Chen, C.F. A target-adapted geometric calibration method for camera-projector system. In Proceedings of the 2010 25th International Conference of Image and Vision Computing New Zealand, Queenstown, New Zealand, 8–9 November 2010; pp. 1–8. [Google Scholar]
  12. Park, S.-Y.; Park, G.G. Active calibration of camera-projector systems based on planar homography. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 320–323. [Google Scholar]
  13. Fernandez, S.; Salvi, J. Planar-based camera-projector calibration. In Proceedings of the 2011 7th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 4–6 September 2011; pp. 633–638. [Google Scholar]
  14. Moreno, D.; Taubin, G. Simple, accurate, and robust projector-camera calibration. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zürich, Switzerland, 13–15 October 2012; pp. 464–471. [Google Scholar]
  15. Gockel, T.; Azad, P.; Dillmann, R. Calibration issues for projector-based 3d-scanning. In Proceedings of the Shape Modeling Applications, Genova, Italy, 7–9 June 2004; pp. 367–370. [Google Scholar]
  16. Liao, J.; Cai, L. A calibration method for uncoupling projector and camera of a structured light system. In Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Xian, China, 2–5 July 2008; pp. 770–774. [Google Scholar]
  17. Hong, W.; Gelb, D.; Trott, M. Automatic calibration of a projector-camera system with a see-through screen. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 337–340. [Google Scholar]
  18. Knyaz, V.A. Automated calibration technique for photogrammetric system based on a multi-media projector and a ccd camera. In Proceedings of the ISPRS Commission V Symposium Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006. [Google Scholar]
  19. Raskar, R.; Brown, M.S.; Ruigang, Y.; Wei-Chao, C.; Welch, G.; Towles, H.; Scales, B.; Fuchs, H. Multi-projector displays using camera-based registration. In Proceedings of the Conference on Visualization ’99: Celebrating Ten Years, San Francisco, CA, USA, 24–29 October 1999; pp. 161–522. [Google Scholar]
  20. Tardif, J.P.; Roy, S.; Trudeau, M. Multi-projectors for arbitrary surfaces without explicit calibration nor reconstruction. In Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, Banff, AB, Canada, 6–10 October 2003; pp. 217–224. [Google Scholar]
  21. Harville, M.; Culbertson, B.; Sobel, I.; Gelb, D.; Fitzhugh, A.; Tanguay, D. Practical methods for geometric and photometric correction of tiled projector. In Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’06), New York, NY, USA, 17–22 June 2006; p. 5. [Google Scholar]
  22. Sun, W.; Sobel, I.; Culbertson, B.; Gelb, D.; Robinson, I. Calibrating multi-projector cylindrically curved displays for “wallpaper” projection. In Proceedings of the 5th ACM/IEEE International Workshop on Projector camera systems, Marina del Rey, CA, USA, 10 August 2008; pp. 1–8. [Google Scholar]
  23. Sajadi, B.; Majumder, A. Auto-calibration of cylindrical multi-projector systems. In Proceedings of the 2010 IEEE Virtual Reality Conference (VR), Waltham, MA, USA, 20–24 March 2010; pp. 155–162. [Google Scholar]
  24. Sajadi, B.; Majumder, A. Markerless view-independent registration of multiple distorted projectors on extruded surfaces using an uncalibrated camera. IEEE Trans. Vis. Comput. Gr. 2009, 15, 1307–1316. [Google Scholar] [CrossRef] [PubMed]
  25. Zhao, L.; Weng, D.; Li, D. The auto-geometric correction of multi-projector for cylindrical surface using bézier patches. J. Soc. Inf. Disp. 2014, 22, 473–481. [Google Scholar] [CrossRef]
  26. Chen, B.-S.; Zhong, Q.; Li, H.-F.; Liu, X.; Xu, H.-S. Automatic geometrical calibration for multiprojector-type light field three-dimensional display. Opt. Eng. 2014, 53, 073107. [Google Scholar] [CrossRef]
  27. Nahon, M.A.; Reid, L.D. Simulator motion-drive algorithms—A designer's perspective. J. Guid. Control Dyn. 1990, 13, 356–362. [Google Scholar] [CrossRef]
  28. Casas, S.; Portalés, C.; Riera, J.V.; Fernández, M. Heuristics for solving the parameter tuning problem in motion cueing algorithms. In Revista Iberoamericana de Automática e Informática industrial; CEA: New Delhi, India, 2017; pp. 193–204. [Google Scholar]
  29. Chueca Pazos, M.; Herráez Boquera, J.; Berné Valero, J.L. Métodos Topográficos; Editorial Paraninfo S.A.: Madrid, Spain, 1996; p. 746. [Google Scholar]
  30. Itseez. Opencv. Available online: http://opencv.org/ (accessed on 2 June 2017).
  31. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  32. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  33. Dermanis, A. Free network solutions with the direct linear transformation method. ISPRS J. Photogramm. Remote Sens. 1994, 49, 2–12. [Google Scholar] [CrossRef]
  34. Kraus, K. Photogrammetry. In Advanced Methods and Applications; Dümmler/Bonn: Bonn, Germany, 1997; p. 466. [Google Scholar]
Figure 1. View of the exterior of the F1 simulator (left) and the projectors of the inside (right).
Figure 2. Data flow of the calibration procedure.
Figure 3. Top view (left) and side view (right) of the spatial distribution of devices and control points (CPs), and representation of measured distances.
Figure 4. Coordinate system.
Figure 5. Spatial positioning of devices and 3D points, where the different coloured dots represent: black: camera projection center; cyan: projector projection center; red: control points at the cylindrical surface; green: checkerboard points at Z = 0; blue: checkerboard points at the cylindrical surface.
Figure 6. Residuals (×50) of the three projector calibration, where: (a) Proj1; (b) Proj2; (c) Proj3.
Figure 7. Projecting on top of the physical CPs (right) and detail of CP2 (left).
Figure 8. Panoramic views of the resulting wall-paper like projections of a grid (top) and a scene (bottom). No photometric correction is applied in the overlapping areas to highlight the results of the method.
Figure 9. A user testing the Formula 1 (F1) virtual simulator.
Table 1. Spatial coordinates of CPs from geometrical rules. Only the central CPs are depicted as an example.

Control Point | X       | Y | Z
CP2           | −a/2    | h | −d
CP10          | a/2 − m | h | −d − n
CP3           | a/2     | h | −d
CP6           | −a/2    | 0 | −d
CP13          | a/2 − m | 0 | −d − n
CP7           | a/2     | 0 | −d
Table 2. Measured distances and fictitious observations (units in m).

Control Point | CP10  | CP3   | CP1   | CP9   | CP11  | CP4   | O
CP2           | 0.835 | 1.920 | 2.045 | 1.018 | 2.745 | 3.040 | 1.570
CP10          |       | 1.190 | 2.585 | 1.760 | 2.250 | 2.780 | 1.570
CP3           |       |       | 3.040 | 2.605 | 1.275 | 2.060 | 1.570
CP1           |       |       |       | 1.170 | 2.995 | 2.920 | 1.570
CP9           |       |       |       |       | 3.060 | 3.045 | 1.570
CP11          |       |       |       |       |       | 0.930 | 1.570
CP4           |       |       |       |       |       |       | 1.570
Table 3. Compensated values (units in m).

Control Point | CP10  | CP3   | CP1   | CP9   | CP11  | CP4   | O
CP2           | 0.851 | 1.905 | 2.032 | 0.992 | 2.761 | 3.090 | 1.570
CP10          |       | 1.157 | 2.608 | 1.756 | 2.260 | 2.795 | 1.567
CP3           |       |       | 3.077 | 2.582 | 1.307 | 2.058 | 1.566
CP1           |       |       |       | 1.153 | 3.066 | 2.797 | 1.586
CP9           |       |       |       |       | 3.062 | 3.118 | 1.551
CP11          |       |       |       |       |       | 0.862 | 1.552
CP4           |       |       |       |       |       |       | 1.588
Table 4. Camera interior orientation (units in pixels) and distortion coefficients.

x0       | y0      | c        | k1        | k2       | p1        | p2        | k3
1384.811 | 826.200 | 2380.083 | −0.160384 | 0.124279 | −0.000252 | −0.000955 | −0.015237
Table 5. Camera exterior orientation at the three locations (units of translations in m, angles in deg).

Camera | X0     | Y0    | Z0     | α        | ν       | κ
Cam1   | 0.554  | 1.192 | 0.476  | 80.3854  | 73.4311 | −82.3739
Cam2   | −0.671 | 1.017 | 1.406  | −69.7206 | 9.3806  | 69.8575
Cam3   | −0.634 | 1.176 | −0.405 | −84.0016 | 77.2742 | 83.9486
Table 6. Projector interior orientation (units in pixels for x0, y0 and c).

Projector | x0      | y0      | c        | m        | d
Proj1     | 873.064 | −56.964 | 1344.290 | −0.99987 | 0.00260
Proj2     | 738.275 | −53.301 | 1518.870 | −0.99997 | −0.00677
Proj3     | 731.679 | −46.186 | 1247.710 | −0.99998 | −0.00352
Table 7. Exterior orientation of projectors (units of translations in m, angles in deg).

Projector | X0     | Y0    | Z0     | α        | ν       | κ
Proj1     | 0.389  | 1.431 | 0.215  | 88.5852  | 76.6541 | −85.5039
Proj2     | −0.283 | 1.457 | 0.752  | 67.4913  | 2.8702  | −67.6160
Proj3     | −0.438 | 1.397 | −0.156 | −89.1899 | 70.6764 | 84.9487
Table 8. Discrepancies between the ideal (Table 6) and the simulated projector interior orientation with approximate 3D coordinates (units in pixels for x0, y0 and c).

Projector | dx0      | dy0    | dc       | dm       | dd
Proj1     | −45.650  | 9.138  | −111.680 | 0.00011  | −0.00174
Proj2     | −150.398 | −4.950 | 13.900   | −0.00001 | −0.00111
Proj3     | −218.024 | 13.286 | −111.970 | 0.00000  | −0.00463
Table 9. Discrepancies between the ideal (Table 7) and the simulated exterior orientation of projectors with approximate 3D coordinates (units of translations in m, angles in deg).

Projector | dX0    | dY0    | dZ0    | dα      | dν      | dκ
Proj1     | −0.151 | −0.003 | −0.034 | 0.1736  | 41.517  | 22.518
Proj2     | −0.151 | 0.010  | 0.019  | 115.601 | 0.7653  | −114.308
Proj3     | 0.052  | −0.006 | −0.249 | −0.0710 | −53.606 | 93.404
