Visual Analytics Challenges in Analyzing Calling Context Trees

Conference paper in: Programming and Performance Visualization Tools (ESPT 2017, ESPT 2018, VPA 2017, VPA 2018)

Abstract

Performance analysis is an integral part of developing and optimizing parallel applications for high performance computing (HPC) platforms. Hierarchical data from different sources is typically available to identify performance issues or anomalies. Some hierarchical data such as the calling context can be very large in terms of breadth and depth of the hierarchy. Classic tree visualizations quickly reach their limits in analyzing such hierarchies with the abundance of information to display. In this position paper, we identify the challenges commonly faced by the HPC community in visualizing hierarchical performance data, with a focus on calling context trees. Furthermore, we motivate and lay out the bases of a visualization that addresses some of these challenges.
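For readers unfamiliar with the data structure the abstract centers on, the following minimal sketch illustrates how a calling context tree (CCT) attributes a performance metric to full call paths rather than to individual functions. It is an illustrative assumption only: the class and method names (CCTNode, insert_path, inclusive_time) are hypothetical and do not come from the paper or from any of the tools it discusses.

```python
# Minimal illustrative sketch of a calling context tree (CCT).
# All names (CCTNode, insert_path, inclusive_time) are hypothetical and
# chosen for illustration; they are not taken from the paper or any tool.

class CCTNode:
    def __init__(self, name):
        self.name = name           # function name at this calling context
        self.children = {}         # callee name -> CCTNode
        self.exclusive_time = 0.0  # metric attributed to this context alone

    def insert_path(self, call_path, time):
        """Attribute 'time' to the leaf of 'call_path' (a list of function names)."""
        node = self
        for frame in call_path:
            node = node.children.setdefault(frame, CCTNode(frame))
        node.exclusive_time += time

    def inclusive_time(self):
        """Exclusive time of this node plus that of its entire subtree."""
        return self.exclusive_time + sum(
            child.inclusive_time() for child in self.children.values()
        )


# Example: two samples sharing a common prefix form a tree, not a flat profile.
root = CCTNode("main")
root.insert_path(["solver", "mpi_wait"], 4.2)
root.insert_path(["solver", "compute"], 1.3)
print(root.inclusive_time())  # 5.5
```

Even this toy example hints at why breadth and depth explode in practice: every distinct call path receives its own node, so deep call chains and wide fan-out multiply the number of nodes a visualization must accommodate.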



Acknowledgment

The ideas presented in this paper originated during the GI-Dagstuhl Seminar 18283, sponsored by the Gesellschaft für Informatik e.V. (GI), where all the authors on this paper were participants. The first author would like to thank LAM Research for its financial support.

This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG) in context of SFB 716, project D.3, as well as the Priority Programme “DFG-SPP 1593: Design For Future—Managed Software Evolution” (HO 5721/1-1), and by the Excellence Initiative of the German federal and state governments. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-756548).

Author information

Correspondence to Abhinav Bhatele.


Copyright information

© 2019 This is a U.S. government work and not under copyright protection in the United States; foreign copyright protection may apply

About this paper

Cite this paper

Bergel, A., et al. (2019). Visual Analytics Challenges in Analyzing Calling Context Trees. In: Bhatele, A., Boehme, D., Levine, J., Malony, A., Schulz, M. (eds.) Programming and Performance Visualization Tools. ESPT 2017, ESPT 2018, VPA 2017, VPA 2018. Lecture Notes in Computer Science, vol. 11027. Springer, Cham. https://doi.org/10.1007/978-3-030-17872-7_14

  • DOI: https://doi.org/10.1007/978-3-030-17872-7_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17871-0

  • Online ISBN: 978-3-030-17872-7
