Abstract
The goal of computer vision research is to impart and improve visual intelligence in a machine, i.e. to enable a machine to see, perceive, and respond in a human-like fashion (though with reduced complexity) using multiple sensors and actuators. The major challenge with such machines is making them perceive and learn from the huge amount of visual information received through their sensors. Mimicking human-like visual perception is an area of research that attracts many researchers. To achieve this complex task of visual perception and learning, visual attention models have been developed. A visual attention model enables a robot to selectively (and autonomously) choose a “behaviourally relevant” segment of visual information for further processing while relatively excluding the rest (Visual Attention for Robotic Cognition: A Survey, March 2011). The aim of this paper is to propose an improved visual attention model with reduced complexity for determining the potential region of interest in a scene.
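To make the idea concrete, the sketch below (Python, using only NumPy and SciPy) illustrates what a bottom-up visual attention model of this kind computes: centre-surround differences on intensity and colour-opponency channels are normalised and summed into a single saliency map, whose peak is taken as the region of interest, broadly in the spirit of Itti, Koch and Niebur (1998). This is only an illustrative sketch under simplified assumptions, not the optimised model proposed in this paper; all function names are hypothetical.

```python
# Illustrative sketch of bottom-up saliency, NOT the optimised model of this paper.
# Feature maps (intensity, red-green, blue-yellow opponency) are compared at a fine
# "centre" scale and a coarse "surround" scale, normalised, and summed into a
# saliency map; the most salient location is returned as the region of interest.

import numpy as np
from scipy.ndimage import gaussian_filter


def center_surround(feature, sigma_c=2.0, sigma_s=8.0):
    """Approximate a centre-surround difference by subtracting a coarse blur
    of the feature map from a fine blur of the same map."""
    return np.abs(gaussian_filter(feature, sigma_c) - gaussian_filter(feature, sigma_s))


def normalize(fmap, eps=1e-8):
    """Scale a feature map to [0, 1] so maps from different channels are comparable."""
    fmap = fmap - fmap.min()
    return fmap / (fmap.max() + eps)


def saliency_map(rgb):
    """Combine intensity and colour-opponency conspicuity maps into one saliency map."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    intensity = (r + g + b) / 3.0
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency
    maps = [center_surround(f) for f in (intensity, rg, by)]
    return normalize(sum(normalize(m) for m in maps))


def most_salient_region(rgb, patch=32):
    """Return the (row, col) of the saliency peak and a square patch around it."""
    s = saliency_map(rgb)
    y, x = np.unravel_index(np.argmax(s), s.shape)
    half = patch // 2
    roi = rgb[max(0, y - half):y + half, max(0, x - half):x + half]
    return (y, x), roi
```

In a full attention system this selection step would be followed by inhibition of return and, for goal-directed search, top-down weighting of the individual feature maps, as in the models cited below.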
References
Amudha, J., Soman, K.P., Kiran, Y.: Feature Selection in Top-Down Visual Attention Model using WEKA. International Journal of Computer Applications 24(4), 38–43 (2011)
Treisman, A.M., Gelade, G.: A feature integration theory of attention. Cogn. Psychol. 12, 97–136 (1980)
Koch, C., Ullman, S.: Shifts in selective visual attention: Toward the underlying neural circuitry. Human Neurobiol. 4, 219–227 (1985)
Logan, G.D.: The CODE theory of visual attention: An integration of space-based and object-based attention. Psychol. Rev. 103, 603–649 (1996)
Wolfe, J.M., Cave, K., Franzel, S.: Guided search: An alternative to the feature integration model for visual search. J. Exp. Psychol.: Human Percept. Perform. 15, 419–433 (1989)
Itti, L., Koch, C.: A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 40, 1489–1506 (2000)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)
Posner, M.I., Cohen, Y.: Components of visual orienting, pp. 531–556. Erlbaum, Hillsdale (1984)
Begum, M., Karray, F.: Visual Attention for Robotic Cognition: A Survey. IEEE Transactions on Autonomous Mental Development 3(1) (March 2011)
Frintrop, S.: VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. LNCS (LNAI), vol. 3899. Springer, Heidelberg (2006)
Frintrop, S., Rome, E., Christensen, H.I.: Computational visual attention systems and their cognitive foundations: A survey. ACM Transactions on Applied Perception 7(1) (2010)
Navalpakkam, V., Itti, L.: Top-down attention selection is fine grained. J. Vis. 6, 1180–1193 (2006)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Amudha, J., Chadalawada, R.K., Subashini, V., Barath Kumar, B. (2013). Optimised Computational Visual Attention Model for Robotic Cognition. In: Abraham, A., Thampi, S. (eds) Intelligent Informatics. Advances in Intelligent Systems and Computing, vol 182. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32063-7_27
DOI: https://doi.org/10.1007/978-3-642-32063-7_27
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-32062-0
Online ISBN: 978-3-642-32063-7