Abstract
Video contains a large amount of motion information. Video – particularly video captured by a moving camera – is segmented based on the relative motion between moving targets and the background. Using the fusion ability of the pulse coupled neural network (PCNN), the target regions and the background regions are each fused. First, the PCNN fuses the direction of the optical flow and extracts moving targets from video, especially video captured by a moving camera. Meanwhile, attention information is generated from the phase spectra of the topological property and the color pairs (red/green, blue/yellow). Second, the video attention map is obtained by linearly fusing the above features (direction fusion, phase spectra, and velocity magnitude), with a weight assigned to each information channel. Experimental results show that the proposed method has better target-tracking ability than three other methods: frequency-tuned salient region detection (FT) [5], the visual background extractor (ViBe) [6], and the phase spectrum of quaternion Fourier transform (PQFT) [1].
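The weighted linear fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the channel names, weights, and the min-max normalization are assumptions, and random arrays stand in for the real feature maps (flow-direction fusion, phase spectrum, velocity magnitude).

```python
import numpy as np

def fuse_attention_map(features, weights):
    """Linearly fuse per-channel feature maps into one attention map.

    features: dict of channel name -> 2-D array, all the same shape.
    weights:  dict of channel name -> scalar weight.
    Each map is min-max normalized to [0, 1] before the weighted sum,
    and the result is divided by the total weight to stay in [0, 1].
    """
    fused = np.zeros_like(next(iter(features.values())), dtype=float)
    total = sum(weights[name] for name in features)
    for name, fmap in features.items():
        fmap = fmap.astype(float)
        span = fmap.max() - fmap.min()
        norm = (fmap - fmap.min()) / span if span > 0 else np.zeros_like(fmap)
        fused += weights[name] * norm
    return fused / total

# Illustrative use with random stand-in feature maps.
rng = np.random.default_rng(0)
feats = {
    "direction": rng.random((4, 4)),   # stand-in for flow-direction fusion
    "phase": rng.random((4, 4)),       # stand-in for phase spectrum map
    "velocity": rng.random((4, 4)),    # stand-in for velocity magnitude
}
w = {"direction": 0.4, "phase": 0.4, "velocity": 0.2}
att = fuse_attention_map(feats, w)
```

The per-channel weights control how much each cue contributes; how the paper chooses them is not specified in the abstract, so they are hard-coded here for illustration.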
References
Guo, C.L., Ma, Q., Zhang, L.M.: Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
Eckhorn, R., Reitboeck, H.J., Arndt, M., et al.: Feature linking via synchronization among distributed assemblies: simulations of results from cat visual cortex. Neural Comput. 2(3), 293–307 (1990)
Gu, X.D., Yu, D.H., Zhang, L.M.: Image shadow removal using pulse coupled neural network. IEEE Trans. Neural Networks 5, 692–698 (2005)
Gu, X.D., Fang, Y., Wang, Y.Y.: Attention selection using global topological properties based on pulse coupled neural network. Comput. Vis. Image Underst. 117, 1400–1411 (2013)
Achanta, R., Hemami, S., Estrada, F., Susstrunk, S.: Frequency-tuned salient region detection. In: IEEE CVPR, pp. 1597–1604 (2009)
Barnich, O., Van Droogenbroeck, M.: ViBe: a universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 20(6), 1709–1724 (2011)
Chen, L.: Topological structure in visual perception. Science 218, 699–700 (1982)
Cisco: Cisco visual networking index: forecast and methodology, 2013–2018 [EB/OL]. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/ip-ngn-ip-next-generation-network/white_paper_c11-481360.html. Accessed 10–14 June 2014
Kim, W., Kim, C.: Spatiotemporal saliency detection using textural contrast and its applications. IEEE Trans. Circuits Syst. Video Technol. 24, 646–659 (2014)
Horn, B., Schunck, B.: Determining optical flow. Artif. Intell. 17, 185–203 (1981)
Acknowledgements
This work was supported in part by National Natural Science Foundation of China under grant 61371148.
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Ni, Q., Wang, J., Gu, X. (2015). Moving Target Tracking Based on Pulse Coupled Neural Network and Optical Flow. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science(), vol 9491. Springer, Cham. https://doi.org/10.1007/978-3-319-26555-1_3
Print ISBN: 978-3-319-26554-4
Online ISBN: 978-3-319-26555-1