Abstract
Scene understanding is the task of assigning each pixel in an image a label, namely the class to which the pixel belongs. Traditional scene understanding follows an object-based approach, which has significant limitations because hand-crafted descriptors cannot capture the characteristics of the whole scene. In this paper, a convolutional neural network based method is proposed to extract the internal features of the whole image, and a softmax regression classifier is then applied to generate the label. Scene understanding for self-navigating vehicles concentrates mainly on the road, so the number of classes is reduced in order to achieve higher accuracy at lower computational cost. A pre-processing step is applied to the Stanford Background Dataset to obtain three-label images containing road, building, and others. As a result, the system yields high accuracy on the three-label dataset at high speed.
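The sketch below illustrates the general idea described in the abstract: a small convolutional feature extractor followed by a softmax regression classifier over three labels (road, building, others). It is a minimal illustration assuming a PyTorch-style implementation; the layer sizes, patch size, and module names are assumptions for demonstration, not the authors' actual architecture.

# Minimal sketch (assumption: PyTorch; layer sizes and names are illustrative,
# not the authors' exact network). A small CNN extracts features from an image
# patch and a softmax (cross-entropy) classifier assigns one of three labels:
# road, building, or others.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # road, building, others

class ThreeLabelCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor (illustrative sizes)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Softmax regression classifier on the flattened feature maps
        self.classifier = nn.Linear(32 * 8 * 8, NUM_CLASSES)

    def forward(self, x):
        # x: batch of 32x32 RGB patches, shape (N, 3, 32, 32)
        h = self.features(x)
        h = h.flatten(start_dim=1)
        return self.classifier(h)  # raw scores; softmax is applied inside the loss

# Usage: train with cross-entropy, which combines softmax and negative log-likelihood.
model = ThreeLabelCNN()
patches = torch.randn(4, 3, 32, 32)   # dummy batch of image patches
labels = torch.tensor([0, 1, 2, 1])   # dummy per-patch labels
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()

In practice, per-pixel labels would be produced by classifying a patch centered on each pixel (or by a fully convolutional variant); the snippet only shows the classification step the abstract describes.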
Copyright information
© 2015 Springer International Publishing Switzerland
About this paper
Cite this paper
Wang, Y., Chen, Q. (2015). Three-Label Outdoor Scene Understanding Based on Convolutional Neural Networks. In: Liu, H., Kubota, N., Zhu, X., Dillmann, R., Zhou, D. (eds) Intelligent Robotics and Applications. ICIRA 2015. Lecture Notes in Computer Science, vol 9244. Springer, Cham. https://doi.org/10.1007/978-3-319-22879-2_41
DOI: https://doi.org/10.1007/978-3-319-22879-2_41
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-22878-5
Online ISBN: 978-3-319-22879-2
eBook Packages: Computer Science (R0)