
A Container-Based Elastic Cloud Architecture for Pseudo Real-Time Exploitation of Wide Area Motion Imagery (WAMI) Stream

Published in: Journal of Signal Processing Systems

Abstract

Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency and military applications. However, due to the enormous data rate, processing a streaming WAMI feed in real time for online, uninterrupted target tracking remains infeasible. In this paper, a pseudo-real-time Dynamic Data Driven Applications System (DDDAS) scheme for WAMI data stream processing is proposed. Taking advantage of temporal and spatial locality, a divide-and-conquer strategy is adopted to overcome the challenge posed by the large volume of dynamic data. In the Pseudo Real-time Exploitation of Sub-Area (PRESA) framework, each WAMI frame is divided into multiple sub-areas, and specified sub-areas are assigned to virtual machines in a container-based cloud computing architecture that allows dynamic resource provisioning to meet performance requirements. A prototype has been implemented, and the experimental results validate the effectiveness of our approach.
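For illustration only, the divide step of this sub-area scheme can be sketched in Python as below. This is a minimal sketch, not the authors' implementation: the grid size, the names split_frame and process_sub_area, and the use of a local process pool standing in for the paper's containers are all assumptions.

# Hypothetical sketch of dividing a WAMI frame into sub-areas and
# dispatching them to parallel workers. A local process pool stands in
# for the container-based cloud workers described in the abstract.
from multiprocessing import Pool
import numpy as np

GRID = (4, 4)  # assumed grid: split each frame into a 4x4 set of sub-areas

def split_frame(frame, grid=GRID):
    """Divide a frame into row-major sub-areas (the divide step)."""
    rows, cols = grid
    h, w = frame.shape[:2]
    return [frame[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def process_sub_area(sub_area):
    """Placeholder for per-container exploitation (e.g., detection/tracking)."""
    return float(sub_area.mean())  # stand-in computation, not the paper's algorithm

if __name__ == "__main__":
    # Synthetic single-channel frame in place of a real WAMI frame.
    frame = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
    with Pool() as pool:  # each worker plays the role of one container
        results = pool.map(process_sub_area, split_frame(frame))
    print(f"processed {len(results)} sub-areas")

In the framework described by the paper, each sub-area would instead be routed to a container whose resources can be provisioned dynamically, which is what enables the pseudo-real-time throughput claimed in the abstract.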



Acknowledgements

This work was supported by the US Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP) and by an AFOSR grant in Dynamic Data-Driven Application Systems. Ryan Wu was a summer undergraduate AFRL research fellow.

The authors also wish to express their gratitude to Dr. Erkang Cheng for his valuable suggestions and discussions on the SIFT dataset and algorithms.

Author information


Corresponding author

Correspondence to Yu Chen.


About this article


Cite this article

Wu, R., Liu, B., Chen, Y. et al. A Container-Based Elastic Cloud Architecture for Pseudo Real-Time Exploitation of Wide Area Motion Imagery (WAMI) Stream. J Sign Process Syst 88, 219–231 (2017). https://doi.org/10.1007/s11265-016-1206-6
