Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications
Figure 1. (a) Schema of the configuration of the stereo rig in relation to the ground; (b) schema of the yaw deviation (ϕ).

Figure 2. Calibration of the deviation of the yaw angle in an urban environment: superposition of two consecutive images, where the consecutive positions of each point of interest are connected by green lines, while the intersections of the straight lines appear as red circles.

Figure 3. Examples of uv-disparity for different scenarios of interest in urban environments: the visible image (left), the corresponding u-disparity (bottom) and, on the right, the v-disparity and the v-disparity-free with the detection of the road profile (dashed red line).

Figure 4. Evolution of the estimated extrinsic parameters (h, θ) along the synthetic sequence [10] using the v-disparity (blue) and the v-disparity-free (red), respectively.

Figure 5. Result of the estimated pitch angle when the vehicle passes through a speed bump.

Figure 6. Assessment of the impact of the roll angle (ρ) on the v-disparity and on the road profile. (a) Visible image of a highway traffic scenario; (b) v-disparity; (c) v-disparity after adding a 9° roll angle; (d) v-disparity after adding an 18° roll angle.

Figure 7. Schema of the different stages of the self-calibration method of the extrinsic parameters (h, θ, ρ) when the roll angle (ρ) has non-negligible values. (a) Schema of the first stage of the method, which shows the straight lines (red) that join the points (red) belonging to the road in the free map for each level of disparity; (b) schema of the second stage, which depicts how to fit the linear equation (dashed red line) to the point cloud data {d(δ)}_n (blue points).

Figure 8. Example of the partial results of self-calibration Method 2 for two different values of the roll angle. (a) Visible image of a synthetic traffic scenario [10] in an urban environment with a roll angle equal to 0°; (b) visible image of a synthetic traffic scenario in an urban environment with a roll angle equal to 20°; (c) point clouds obtained for each value of the roll angle with their respective fitted straight lines.

Figure 9. Self-calibration results for the roll angle (ρ) and the pitch angle (θ). The estimated values appear in blue and the ground truth in red. (a,b) Results using Method 2; (c) results using Method 1.

Figure 10. Result of the self-calibration of the height (h) for non-negligible values of the roll angle. Estimated values appear in blue and the ground truth in red.

Figure 11. (a) Original disparity map (equalized); (b) virtual disparity map (equalized).

Figure 12. Comparison between the self-calibration results obtained by using the presented Method 2 and the method based on the virtual disparity map presented in [11].

Figure 13. Self-calibration results of the extrinsic parameters (ρ, θ, h) in a real traffic environment (Example 1).

Figure 14. Self-calibration results of the extrinsic parameters (ρ, θ, h) in a real traffic environment (Example 2).
Abstract
1. Introduction
2. State of the Art
- Calibration patterns: In this first group of calibration methods, patterns are taken into account for determining the extrinsic parameters of the vision system. These methods are based on minimizing the projection error of a number of known points located around the vehicle, which are joined in order to form the pattern. These patterns may be located on the ground [15,16] or painted on the hood of the vehicle [17].
- Road marks: Secondly, the calibration process is performed by means of road marks [18], such as lines [19,20,21] or dashed lines on the roadway [22]; it is even possible to use parking lines as the calibration pattern [23]. These methods allow the extrinsic parameters to be recalculated at different times and positions. Their drawback is that road marks cannot be detected constantly: in urban environments, road marks may be poorly conserved or occluded by other elements, such as parked vehicles, and, above all, there are few road marks within cities.
- The geometry of the road in front of the vehicle: The last group of methods is based on estimating the geometry of the roadway in front of the vehicle, which can be accomplished mainly in two ways. Firstly, the three-dimensional information about the vehicle environment contained in the disparity map allows one to determine the position of the ground in front of the vehicle by means of different kinds of projections. A second technique is based on sampling 3D points and subsequently fitting them to a plane [24]; both techniques can be used in outdoor applications [25,26]. Such methods recover the extrinsic parameters without requiring a calibration pattern or road marks. Moreover, they allow the relative position of the vision system to be recalculated in real time while the vehicle is moving, adapting to changing conditions, as discussed above, such as vehicle load, acceleration or irregularities of the roadway.
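The plane-fitting technique of this last group can be sketched in a few lines. The following is an illustrative sketch only, not the implementation of any cited paper: the synthetic point generation, the plane model y = a·x + b·z + c and the pose-recovery formulas are assumptions for a camera frame with x pointing right, y down and z forward.

```python
import math

def fit_road_plane(points):
    """Least-squares fit of y = a*x + b*z + c to 3D road points
    (assumed camera frame: x right, y down, z forward)."""
    # Accumulate the normal equations A^T A w = A^T y for w = (a, b, c).
    sxx = sxz = sx = szz = sz = n = 0.0
    sxy = szy = sy = 0.0
    for x, y, z in points:
        sxx += x * x; sxz += x * z; sx += x
        szz += z * z; sz += z
        sxy += x * y; szy += z * y; sy += y
        n += 1.0
    m = [[sxx, sxz, sx], [sxz, szz, sz], [sx, sz, n]]
    r = [sxy, szy, sy]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    # Solve the 3x3 system by Cramer's rule.
    d = det3(m)
    sol = []
    for col in range(3):
        mc = [row[:] for row in m]
        for i in range(3):
            mc[i][col] = r[i]
        sol.append(det3(mc) / d)
    return sol  # (a, b, c)

# Synthetic road points for a camera 1.2 m above a flat road,
# pitched 5 degrees down, with zero roll (illustrative values only).
theta_gt, h_gt = math.radians(5.0), 1.2
pts = [(x, z * math.tan(theta_gt) + h_gt / math.cos(theta_gt), z)
       for z in (5.0, 10.0, 20.0) for x in (-2.0, 0.0, 2.0)]

a, b, c = fit_road_plane(pts)
pitch = math.atan(b)          # pitch from the z-slope of the fitted plane
roll = math.atan(a)           # roll from the x-slope (zero here)
height = c * math.cos(pitch)  # camera height above the plane
```

Because the fitted plane directly encodes the camera-to-road geometry, the pose follows from its coefficients without any calibration pattern, which is what makes this family of methods usable while driving.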
3. Extrinsic Parameter Self-Calibration
3.1. System Configuration
3.2. Yaw Calibration (ϕ)
3.3. Self-Calibration of the Height (h) and the Pitch (θ) and Roll (ρ) Angles
3.3.1. Self-Calibration for Negligible Values of the Roll Angle (Method 1)
3.3.2. Self-Calibration for Non-Negligible Values of the Roll Angle (Method 2)
- Firstly, all pixels of the free map for each possible level of disparity (δ) are gathered together in pairs of points. A linear equation is obtained from each pair of points. All of these linear equations fulfil expression (12) (see Figure 7a), and it is therefore possible to obtain, from each linear equation, a pair formed by its slope (c) and its y-intercept (d(δ)).
- Once the first stage has been completed for every pixel of the free map, a solution set has been gathered for both the slope (c) and the y-intercept (d(δ)) of linear Equation (12). The solution set of y-intercepts ({d(δ)}_n), in turn, takes the form of a point cloud, which can be fitted to a linear equation that fulfils expression (18), obtaining the values of both of its coefficients (see Figure 7b). The value of the pitch angle (θ) is estimated directly from this fit by means of Equation (19).
- The roll angle (ρ) is thereupon estimated by means of the solution set of the slope (c) (see Equation (13)), where the optimum solution can be obtained by using RANSAC [38]. The roll angle (ρ) can be estimated by using (20) because the value of the pitch angle (θ) has already been calculated in the second stage. Finally, the remaining extrinsic parameter, the height (h), may be estimated from the fitted y-intercept (see Equation (18)) and the pitch (θ) and roll (ρ) angles by means of Equation (21).
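The stages above can be sketched as follows. This is a hedged illustration, not the paper's implementation: the function name, the pairing of pixels and the simple consensus vote standing in for a full RANSAC are assumptions, and the final mapping from the fitted slope and intercepts to (ρ, θ, h) through Equations (13) and (19)–(21) is omitted because those equations are not reproduced here.

```python
import random

def method2_sketch(road_pixels, ransac_iters=200, tol=0.5):
    """road_pixels maps each disparity level to a list of (u, v)
    image points assumed to lie on the road in the free map."""
    slopes = []  # slope candidates c (carry the roll information)
    cloud = []   # (disparity, y-intercept) samples for the second stage
    for delta, pts in road_pixels.items():
        # Stage 1: a line through each pair of same-disparity points.
        for (u1, v1), (u2, v2) in zip(pts[::2], pts[1::2]):
            if u1 == u2:
                continue
            c = (v2 - v1) / (u2 - u1)
            d = v1 - c * u1
            slopes.append(c)
            cloud.append((delta, d))
    # Stage 2: least-squares fit of d(delta) = m*delta + b over the cloud.
    n = len(cloud)
    sx = sum(x for x, _ in cloud); sy = sum(y for _, y in cloud)
    sxx = sum(x * x for x, _ in cloud); sxy = sum(x * y for x, y in cloud)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    # Stage 3: robust slope by a RANSAC-style consensus vote [38].
    best_c, best_support = slopes[0], -1
    for _ in range(ransac_iters):
        cand = random.choice(slopes)
        support = sum(1 for s in slopes if abs(s - cand) < tol)
        if support > best_support:
            best_c, best_support = cand, support
    return best_c, m, b

# Synthetic check: road lines of slope 0.1 with intercept d(δ) = 2δ + 100.
road_pixels = {d: [(u, 0.1 * u + 2.0 * d + 100.0) for u in range(10)]
               for d in range(5, 15)}
c_est, m_est, b_est = method2_sketch(road_pixels)
```

In this sketch, c_est would feed the roll estimate, while m_est and b_est would feed the pitch and height estimates through the equations cited in the text.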
4. Results and Discussion
4.1. Assessment of the Method
4.2. Comparison with Methods of the State of the Art
4.3. Experimental Results
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
1. WHO. The Top 10 Causes of Death. Available online: http://www.who.int/mediacentre/factsheets/fs310/en/ (accessed on 7 May 2016).
2. European Commission. Road Safety in the European Union. Trends, Statistics and Main Challenges. Available online: http://ec.europa.eu/transport/road_safety/pdf/vademecum_2015.pdf (accessed on 7 May 2016).
3. Jiménez, F.; Naranjo, J.E.; Gómez, Ó. Autonomous manoeuvring systems for collision avoidance on single carriageway roads. Sensors 2012, 12, 16498–16521.
4. Du, M.; Mei, T.; Liang, H.; Chen, J.; Huang, R.; Zhao, P. Drivers’ visual behavior-guided RRT motion planner for autonomous on-road driving. Sensors 2016, 16, 102.
5. Lee, B.H.; Song, J.H.; Im, J.H.; Im, S.H.; Heo, M.B.; Jee, G.I. GPS/DR error estimation for autonomous vehicle localization. Sensors 2015, 15, 20779–20798.
6. Shinar, D. Psychology on the Road. The Human Factor in Traffic Safety; John Wiley & Sons: Hoboken, NJ, USA, 1978.
7. Martín, D.; García, F.; Musleh, B.; Olmeda, D.; Peláez, G.; Marín, P.; Ponz, A.; Rodríguez, C.; Al-Kaff, A.; de la Escalera, A.; et al. IVVI 2.0: An intelligent vehicle based on computational perception. Expert Syst. Appl. 2014, 41, 7927–7944.
8. Musleh, B.; Martin, D.; Armingol, J.M.; de la Escalera, A. Continuous pose estimation for stereo vision based on UV disparity applied to visual odometry in urban environments. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 3983–3988.
9. Musleh, B.; de la Escalera, A.; Armingol, J.M. UV disparity analysis in urban environments. In Computer Aided Systems Theory–EUROCAST 2011; Springer: Las Palmas de Gran Canaria, Spain, 2012; pp. 426–432.
10. Van Der Mark, W.; Gavrila, D.M. Real-time dense stereo for intelligent vehicles. IEEE Trans. Intell. Transp. Syst. 2006, 7, 38–50.
11. Suganuma, N.; Fujiwara, N. An obstacle extraction method using virtual disparity image. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Canberra, Australia, 13–15 June 2007; pp. 456–461.
12. Onkarappa, N.; Sappa, A.D. On-board monocular vision system pose estimation through a dense optical flow. In Image Analysis and Recognition; Springer: Póvoa de Varzim, Portugal, 2010; pp. 230–239.
13. Schlipsing, M.; Salmen, J.; Lattke, B.; Schroter, K.; Winner, H. Roll angle estimation for motorcycles: Comparing video and inertial sensor approaches. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Madrid, Spain, 3–7 June 2012; pp. 500–505.
14. Sappa, A.D.; Dornaika, F.; Ponsa, D.; Gerónimo, D.; López, A. An efficient approach to onboard stereo vision system pose estimation. IEEE Trans. Intell. Transp. Syst. 2008, 9, 476–490.
15. Marita, T.; Oniga, F.; Nedevschi, S.; Graf, T.; Schmidt, R. Camera calibration method for far range stereovision sensors used in vehicles. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006; pp. 356–363.
16. Hold, S.; Nunn, C.; Kummert, A.; Muller-Schneiders, S. Efficient and robust extrinsic camera calibration procedure for lane departure warning. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 382–387.
17. Broggi, A.; Bertozzi, M.; Fascioli, A. Self-calibration of a stereo vision system for automotive applications. In Proceedings of the IEEE International Conference on Robotics and Automation, Seoul, Korea, 21–26 May 2001; Volume 4, pp. 3698–3703.
18. Hold, S.; Gormer, S.; Kummert, A.; Meuter, M.; Muller-Schneiders, S. A novel approach for the online initial calibration of extrinsic parameters for a car-mounted camera. In Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, St. Louis, MO, USA, 4–7 October 2009; pp. 1–6.
19. Coulombeau, P.; Laurgeau, C. Vehicle yaw, pitch, roll and 3D lane shape recovery by vision. In Proceedings of the IEEE Intelligent Vehicle Symposium, Versailles, France, 17–21 June 2002; Volume 2, pp. 619–625.
20. Collado, J.; Hilario, C.; de la Escalera, A.; Armingol, J. Self-calibration of an on-board stereo-vision system for driver assistance systems. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006; pp. 156–162.
21. Nedevschi, S.; Vancea, C.; Marita, T.; Graf, T. Online extrinsic parameters calibration for stereovision systems used in far-range detection vehicle applications. IEEE Trans. Intell. Transp. Syst. 2007, 8, 651–660.
22. De Paula, M.; Jung, C.; da Silveira, L.G., Jr. Automatic on-the-fly extrinsic camera calibration of onboard vehicular cameras. Expert Syst. Appl. 2014, 41, 1997–2007.
23. Li, S.; Hai, Y. Easy calibration of a blind-spot-free fisheye camera system using a scene of a parking space. IEEE Trans. Intell. Transp. Syst. 2011, 12, 232–242.
24. Cech, M.; Niem, W.; Abraham, S.; Stiller, C. Dynamic ego-pose estimation for driver assistance in urban environments. In Proceedings of the 2004 IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 43–48.
25. Teoh, C.; Tan, C.; Tan, Y.C. Ground plane detection for autonomous vehicle in rainforest terrain. In Proceedings of the 2010 IEEE Conference on Sustainable Utilization and Development in Engineering and Technology (STUDENT), Kuala Lumpur, Malaysia, 20–21 November 2010; pp. 7–12.
26. Wang, Q.; Zhang, Q.; Rovira-Mas, F. Auto-calibration method to determine camera pose for stereovision-based off-road vehicle navigation. Environ. Control Biol. 2010, 48, 59–72.
27. Labayrade, R.; Aubert, D.; Tarel, J. Real time obstacle detection in stereovision on non flat road geometry through v-disparity representation. In Proceedings of the 2002 IEEE Intelligent Vehicle Symposium, Versailles, France, 17–21 June 2002; Volume 2, pp. 646–651.
28. Labayrade, R.; Aubert, D. A single framework for vehicle roll, pitch, yaw estimation and obstacles detection by stereovision. In Proceedings of the 2003 IEEE Intelligent Vehicles Symposium, Columbus, OH, USA, 9–11 June 2003; pp. 31–36.
29. Suganuma, N.; Shimoyama, M.; Fujiwara, N. Obstacle detection using virtual disparity image for non-flat road. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 596–601.
30. Sappa, A.; Gerónimo, D.; Dornaika, F.; López, A. On-board camera extrinsic parameter estimation. Electron. Lett. 2006, 42, 745–747.
31. Llorca, D.F.; Sotelo, M.; Parra, I.; Naranjo, J.E.; Gavilán, M.; Álvarez, S. An experimental study on pitch compensation in pedestrian-protection systems for collision avoidance and mitigation. IEEE Trans. Intell. Transp. Syst. 2009, 10, 469–474.
32. Sappa, A.D.; Herrero, R.; Dornaika, F.; Gerónimo, D.; López, A. Road approximation in euclidean and v-disparity space: A comparative study. In Computer Aided Systems Theory–EUROCAST 2007; Springer: Las Palmas de Gran Canaria, Spain, 2007; pp. 1105–1112.
33. Seki, A.; Okutomi, M. Robust obstacle detection in general road environment based on road extraction and pose estimation. Electron. Commun. Jpn. 2007, 90, 12–22.
34. Dornaika, F.; Sappa, A.D. Real time on board stereo camera pose through image registration. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 804–809.
35. Dornaika, F.; Alvarez, J.; Sappa, A.D.; López, A.M. A new framework for stereo sensor pose through road segmentation and registration. IEEE Trans. Intell. Transp. Syst. 2011, 12, 954–966.
36. Dornaika, F.; Sappa, A.D. A featureless and stochastic approach to on-board stereo vision system pose. Image Vis. Comput. 2009, 27, 1382–1393.
37. Fusiello, A.; Trucco, E.; Verri, A. A compact algorithm for rectification of stereo pairs. Mach. Vis. Appl. 2000, 12, 16–22.
38. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
39. Broggi, A.; Caraffi, C.; Fedriga, R.I.; Grisleri, P. Obstacle detection with stereo vision for off-road vehicle navigation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, San Diego, CA, USA, 20–25 June 2005; p. 65.
40. Zhao, J.; Katupitiya, J.; Ward, J. Global correlation based ground plane estimation using v-disparity image. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 529–534.
41. Lee, C.H.; Lim, Y.C.; Kwon, S.; Lee, J.H. Obstacle localization with a binarized v-disparity map using local maximum frequency values in stereo vision. In Proceedings of the 2nd International Conference on Signals, Circuits and Systems, Nabeul, Tunisia, 7–9 November 2008; pp. 1–4.
| % Points Used | 50% | 25% | 10% | 5% | 1% |
|---|---|---|---|---|---|
| Pitch angle average error (°) | 0.1856 | 0.1751 | 0.1985 | 0.2174 | 0.2939 |
| Roll angle average error (°) | 0.3361 | 0.3598 | 0.3791 | 0.3771 | 0.3894 |
| Computing time reduction (%) | 58.0 | 72.9 | 78.16 | 79.56 | 80.49 |
| Method 2 | Roll Angle (ρ) | Pitch Angle (θ) | Height (h) |
|---|---|---|---|
| Mean | 0.38° | 0.20° | 0.012 m |

| Virtual disparity map | Roll Angle (ρ) | Pitch Angle (θ) | Height (h) |
|---|---|---|---|
| Mean | 0.36° | 0.27° | 0.018 m |

| Virtual free map | Roll Angle (ρ) | Pitch Angle (θ) | Height (h) |
|---|---|---|---|
| Mean | 0.33° | 0.28° | 0.017 m |
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Musleh, B.; Martín, D.; Armingol, J.M.; De la Escalera, A. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications. Sensors 2016, 16, 1492. https://doi.org/10.3390/s16091492