Abstract
In many forensic and data-analytics applications there is a need to detect whether, and for how long, a specific person is present in a video. Frames in which the person cannot be recognized by state-of-the-art engines are of particular importance. We describe a new framework for person detection and persistence analysis in noisy and cluttered videos. It combines a new approach to tagging individuals with dynamic person-specific tags, occlusion resolution, and contact re-acquisition. To ensure that the tagging is robust to occlusions and partial visibility, the tags are built from small pieces of the face surface. To account for the wide and unpredictable range of pose and appearance variations and for environmental and illumination clutter, the tags are continuously and automatically updated by local incremental learning of the object's background and foreground.
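To make the mechanism concrete, the following Python sketch illustrates the general idea of a fragment-based appearance tag with incremental foreground/background learning. It is a simplified illustration under stated assumptions, not the authors' SCAR implementation: the FragmentTag class, its grey-level histograms, and the fixed learning rate are all hypothetical choices introduced for exposition.

```python
import numpy as np


class FragmentTag:
    """Hypothetical fragment-based appearance tag (illustrative only, not SCAR).

    Each tag models one small face-surface patch with a foreground histogram
    and a background histogram, both updated incrementally as frames arrive.
    """

    def __init__(self, patch, n_bins=16, learn_rate=0.1):
        self.n_bins = n_bins
        self.learn_rate = learn_rate                   # weight given to each new observation
        self.fg_hist = self._histogram(patch)          # foreground (patch) model
        self.bg_hist = np.full(n_bins, 1.0 / n_bins)   # uniform background prior

    def _histogram(self, patch):
        # Grey-level histogram of a patch, normalised to sum to 1.
        h, _ = np.histogram(patch, bins=self.n_bins, range=(0, 256))
        h = h.astype(float)
        return h / max(h.sum(), 1e-9)

    def update(self, patch, surround):
        # Local incremental learning: blend the stored foreground/background
        # models with histograms of the current patch and its surroundings.
        a = self.learn_rate
        self.fg_hist = (1 - a) * self.fg_hist + a * self._histogram(patch)
        self.bg_hist = (1 - a) * self.bg_hist + a * self._histogram(surround)

    def score(self, patch):
        # Likelihood-ratio style score: how much better the patch matches the
        # foreground model than the background model (higher = better match).
        h = self._histogram(patch)
        eps = 1e-9
        return float(np.sum(h * np.log((self.fg_hist + eps) / (self.bg_hist + eps))))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face_patch = rng.integers(0, 256, size=(16, 16))   # stand-in for one face fragment
    surroundings = rng.integers(0, 256, size=(48, 48)) # stand-in for local background
    tag = FragmentTag(face_patch)
    tag.update(face_patch, surroundings)
    print("match score:", tag.score(face_patch))
```

In a full system one such tag would be kept per face fragment, so that a person can still be scored when only a few fragments are visible; here only the per-fragment update and scoring are sketched.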
This research was partially supported by the National Science Foundation (Award #0916610), two gifts from the Gerondelis Foundation, and the Robert Crooks Stanley Fellowship Fund.
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Kamberov, G., Burlick, M., Karydas, L., Koteoglou, O. (2012). SCAR: Dynamic Adaptation for Person Detection and Persistence Analysis in Unconstrained Videos. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2012. Lecture Notes in Computer Science, vol 7432. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33191-6_18
DOI: https://doi.org/10.1007/978-3-642-33191-6_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33190-9
Online ISBN: 978-3-642-33191-6