Abstract
Autonomous driving is an extremely challenging problem, and existing driverless cars rely on non-visual sensing to mitigate the limitations of machine vision approaches. This paper presents a driving school framework for incrementally learning a fast and robust steering behaviour from visual gist alone. The framework is based on an autonomous steering program interfacing in real time with a racing simulator: the teacher is a racing program with perfect insight into its position on the road, whereas the student learns to steer from visual gist only. Experiments show that (i) the framework allows the visual driver to drive around the track successfully after a few iterations, demonstrating that visual gist is a sufficient input for driving the car; and (ii) the number of training rounds required to drive around a track decreases when the student has already experienced other tracks, showing that the learnt model generalises well to unseen tracks.
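The teacher-student loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the gist descriptor here is a toy coarse-grid intensity summary standing in for a multi-scale holistic descriptor, the student is a simple ridge regressor, and all names (`gist_features`, `RidgeStudent`, `training_round`) are hypothetical. In the real framework the teacher's steering command comes from the simulator's ground-truth road state rather than from the image.

```python
import numpy as np

def gist_features(frame, n_blocks=4):
    """Toy holistic descriptor: mean intensity over a coarse spatial grid
    (a crude stand-in for a gist-style descriptor of the whole scene)."""
    h, w = frame.shape
    bh, bw = h // n_blocks, w // n_blocks
    return np.array([frame[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                     for i in range(n_blocks) for j in range(n_blocks)])

class RidgeStudent:
    """Linear least-squares student mapping gist features to a steering angle."""
    def __init__(self, lam=1e-3):
        self.lam, self.w = lam, None

    def fit(self, X, y):
        X = np.hstack([X, np.ones((len(X), 1))])        # append bias term
        A = X.T @ X + self.lam * np.eye(X.shape[1])     # ridge-regularised normal equations
        self.w = np.linalg.solve(A, X.T @ y)

    def steer(self, x):
        return float(np.append(x, 1.0) @ self.w)

def training_round(frames, teacher_steer, student, X, y):
    """One lap of the driving school: the teacher labels every frame it sees,
    the dataset grows incrementally, and the student is refit on all data."""
    for frame in frames:
        X.append(gist_features(frame))
        y.append(teacher_steer(frame))                  # perfect-knowledge label
    student.fit(np.array(X), np.array(y))
    return X, y
```

Running several such rounds, with the student driving between rounds while the teacher keeps labelling, is what allows the steering model to improve lap after lap.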
Rudzits, R., Pugeault, N. Efficient Learning of Pre-attentive Steering in a Driving School Framework. Künstl Intell 29, 51–57 (2015). https://doi.org/10.1007/s13218-014-0340-1