DOI: 10.1145/2642937.2642973

Symbolic state validation through runtime data

Published: 15 September 2014

Abstract

Real-world programs are typically built on top of many library functions. Symbolic analysis of these programs generally requires precise models of these functions' Application Programming Interfaces (APIs), which are mostly unavailable because such models are costly to construct. A variant approach to symbolic analysis is to over-approximate the return values of those APIs that have not been modeled. However, such approximation can induce many unreachable symbolic states, which are expensive to validate manually. In this paper, we propose a static approach to automatically validating the reported anomalous symbolic states. The validation makes use of the available runtime data of the un-modeled APIs collected from previous program executions. We show that the symbolic state validation problem can be cast as a MAX-SAT problem and solved by existing constraint solvers.
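To make the MAX-SAT framing concrete, here is a minimal illustrative sketch (not the paper's implementation): each symbolic constraint becomes a clause, and the solver seeks an assignment that satisfies as many clauses as possible. The brute-force search below stands in for the "existing constraint solvers" the abstract mentions.

```python
from itertools import product

def max_sat(num_vars, clauses):
    """Brute-force MAX-SAT: return (best_assignment, satisfied_count).

    A clause is a list of literals; literal i means variable i is True,
    literal -i means variable i is False (variables are 1-indexed).
    """
    best, best_count = None, -1
    for bits in product([False, True], repeat=num_vars):
        # Count clauses satisfied under this truth assignment.
        count = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        if count > best_count:
            best, best_count = bits, count
    return best, best_count

# Three constraints over two variables; they are mutually inconsistent,
# so at most two can hold simultaneously.
clauses = [[1], [-1, 2], [-2, -1]]
assignment, satisfied = max_sat(2, clauses)
print(satisfied)  # 2
```

In a real validation pipeline the clause set would encode path constraints and observed API behavior, and an industrial solver would replace this exponential search.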
Our approach is motivated by two observations. First, we may bind the symbolic parameters of un-modeled APIs based on observations made in earlier executions of other programs. This binding enables us to use the corresponding observed concrete return values of those APIs to validate the symbolic states arising from the over-approximated return values of the un-modeled APIs. Second, some symbolic constraints can be accurately evaluated despite the imprecision of the over-approximated symbolic values.
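The first observation can be sketched as follows. This is a simplification with hypothetical names (`validate_state`, `parse_port`), not the authors' code: given concrete input-to-output pairs observed for an un-modeled API in previous runs, substitute the observed return value into the state's path condition and evaluate it concretely. A state whose condition is falsified by every matching observation is flagged as likely spurious.

```python
def validate_state(path_condition, api_name, observations, bindings):
    """Check a symbolic state's path condition against runtime observations.

    path_condition: callable taking the API's return value, returning bool
    observations:   mapping api_name -> list of (inputs, return_value) pairs
    bindings:       concrete values bound to the API's inputs in this state
    Returns True if some observation with matching inputs satisfies the
    path condition (the state is plausibly reachable), False otherwise.
    """
    for inputs, ret in observations.get(api_name, []):
        if inputs == bindings and path_condition(ret):
            return True
    return False

# Observed calls to a hypothetical parse_port() API from earlier runs.
observed = {"parse_port": [(("80",), 80), (("http",), -1)]}

# A symbolic state assumes parse_port("http") returned a valid port (> 0);
# the only matching observation returned -1, so the state is flagged spurious.
reachable = validate_state(lambda r: r > 0, "parse_port", observed, ("http",))
print(reachable)  # False
```

Symbolic parameters without a matching observation would be left over-approximated, which is where the paper's second observation (constraints evaluable despite imprecision) comes in.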
Our technique found 80 previously unreported bugs when applied to 10 popular programs totaling 1.5 million lines of code. All of these bugs were confirmed by test cases. Our technique presents a promising way to apply the big-data paradigm to software engineering: it provides a mechanism to validate the symbolic states of a project by leveraging the many concrete input-output values of APIs collected from other projects.



Published In

ASE '14: Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering
September 2014
934 pages
ISBN:9781450330138
DOI:10.1145/2642937

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. api modeling
  2. dynamic analysis
  3. symbolic analysis
  4. warning validation

Qualifiers

  • Research-article

Acceptance Rates

ASE '14 paper acceptance rate: 82 of 337 submissions (24%)
